Dec 04 06:09:03 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 04 06:09:03 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:03 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:03 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 06:09:04 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 04 06:09:04 crc kubenswrapper[4832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 06:09:04 crc kubenswrapper[4832]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 04 06:09:04 crc kubenswrapper[4832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 06:09:04 crc kubenswrapper[4832]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 04 06:09:04 crc kubenswrapper[4832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 04 06:09:04 crc kubenswrapper[4832]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.512309 4832 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515411 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515428 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515433 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515436 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515440 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515445 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515448 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515452 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515456 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515460 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515464 4832 feature_gate.go:330] unrecognized feature gate: Example Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515468 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515472 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515475 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515480 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515484 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515488 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515496 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515501 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515505 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515510 4832 feature_gate.go:330] unrecognized 
feature gate: ClusterMonitoringConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515514 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515518 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515522 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515525 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515529 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515532 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515536 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515539 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515543 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515546 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515550 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515553 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515557 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515560 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515564 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515568 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515572 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515575 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515579 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515582 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515586 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515590 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515593 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515596 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515600 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515603 4832 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515607 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515611 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515615 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515618 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515622 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515626 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515634 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515638 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515643 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515648 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515652 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515657 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
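[annotation] The long W1204 ... feature_gate.go:330 runs above come from the OpenShift feature set being handed to the upstream Kubernetes feature-gate parser, which does not recognize the OpenShift-only names; as the rest of this log shows, the same list is re-parsed several times during startup (at 06:09:04.515x, .516x, .517x, .534x and .535x), so the identical warnings recur. A short sketch that tallies the rejected names makes the repetition visible; the sample text is abridged from the entries above:

import re
from collections import Counter

text = """\
W1204 06:09:04.515411 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
W1204 06:09:04.515428 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
W1204 06:09:04.516851 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
W1204 06:09:04.518078 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
"""

# Count how many times each unknown gate name is rejected.
gates = Counter(re.findall(r"unrecognized feature gate: (\S+)", text))
for name, n in gates.most_common():
    print(f"{n:3d}  {name}")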
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515661 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515665 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515668 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515672 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515675 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515679 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515682 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515685 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515689 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515697 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515701 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.515704 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515785 4832 flags.go:64] FLAG: --address="0.0.0.0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515794 4832 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515804 4832 flags.go:64] FLAG: --anonymous-auth="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515811 4832 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515817 4832 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515821 4832 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515826 4832 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515832 4832 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515836 4832 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515840 4832 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515844 4832 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515849 4832 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515853 4832 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515857 4832 flags.go:64] FLAG: --cgroup-root="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515862 4832 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 04 06:09:04 crc kubenswrapper[4832]: 
I1204 06:09:04.515867 4832 flags.go:64] FLAG: --client-ca-file="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515871 4832 flags.go:64] FLAG: --cloud-config="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515876 4832 flags.go:64] FLAG: --cloud-provider="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515880 4832 flags.go:64] FLAG: --cluster-dns="[]" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515899 4832 flags.go:64] FLAG: --cluster-domain="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515903 4832 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515907 4832 flags.go:64] FLAG: --config-dir="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515911 4832 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515916 4832 flags.go:64] FLAG: --container-log-max-files="5" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515921 4832 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515925 4832 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515929 4832 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515933 4832 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515937 4832 flags.go:64] FLAG: --contention-profiling="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515941 4832 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515945 4832 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515981 4832 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515985 4832 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515991 4832 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515995 4832 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.515999 4832 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516003 4832 flags.go:64] FLAG: --enable-load-reader="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516007 4832 flags.go:64] FLAG: --enable-server="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516011 4832 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516019 4832 flags.go:64] FLAG: --event-burst="100" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516023 4832 flags.go:64] FLAG: --event-qps="50" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516027 4832 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516031 4832 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516034 4832 flags.go:64] FLAG: --eviction-hard="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516040 4832 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516043 
4832 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516049 4832 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516053 4832 flags.go:64] FLAG: --eviction-soft="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516057 4832 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516061 4832 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516065 4832 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516069 4832 flags.go:64] FLAG: --experimental-mounter-path="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516074 4832 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516078 4832 flags.go:64] FLAG: --fail-swap-on="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516082 4832 flags.go:64] FLAG: --feature-gates="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516087 4832 flags.go:64] FLAG: --file-check-frequency="20s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516090 4832 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516095 4832 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516099 4832 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516103 4832 flags.go:64] FLAG: --healthz-port="10248" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516107 4832 flags.go:64] FLAG: --help="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516111 4832 flags.go:64] FLAG: --hostname-override="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516115 4832 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516119 4832 flags.go:64] FLAG: --http-check-frequency="20s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516123 4832 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516128 4832 flags.go:64] FLAG: --image-credential-provider-config="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516133 4832 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516143 4832 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516147 4832 flags.go:64] FLAG: --image-service-endpoint="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516151 4832 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516155 4832 flags.go:64] FLAG: --kube-api-burst="100" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516160 4832 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516164 4832 flags.go:64] FLAG: --kube-api-qps="50" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516168 4832 flags.go:64] FLAG: --kube-reserved="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516172 4832 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516176 4832 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516180 4832 flags.go:64] FLAG: --kubelet-cgroups="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516184 4832 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516189 4832 flags.go:64] FLAG: --lock-file="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516193 4832 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516197 4832 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516201 4832 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516207 4832 flags.go:64] FLAG: --log-json-split-stream="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516211 4832 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516215 4832 flags.go:64] FLAG: --log-text-split-stream="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516220 4832 flags.go:64] FLAG: --logging-format="text" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516223 4832 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516228 4832 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516232 4832 flags.go:64] FLAG: --manifest-url="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516235 4832 flags.go:64] FLAG: --manifest-url-header="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516241 4832 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516246 4832 flags.go:64] FLAG: --max-open-files="1000000" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516251 4832 flags.go:64] FLAG: --max-pods="110" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516254 4832 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516258 4832 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516263 4832 flags.go:64] FLAG: --memory-manager-policy="None" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516267 4832 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516271 4832 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516274 4832 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516279 4832 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516288 4832 flags.go:64] FLAG: --node-status-max-images="50" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516293 4832 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516296 4832 flags.go:64] FLAG: --oom-score-adj="-999" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516307 4832 flags.go:64] FLAG: --pod-cidr="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516311 4832 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516320 4832 flags.go:64] FLAG: --pod-manifest-path="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516324 4832 flags.go:64] FLAG: --pod-max-pids="-1" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516329 4832 flags.go:64] FLAG: --pods-per-core="0" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516333 4832 flags.go:64] FLAG: --port="10250" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516337 4832 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516342 4832 flags.go:64] FLAG: --provider-id="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516346 4832 flags.go:64] FLAG: --qos-reserved="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516350 4832 flags.go:64] FLAG: --read-only-port="10255" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516353 4832 flags.go:64] FLAG: --register-node="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516357 4832 flags.go:64] FLAG: --register-schedulable="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516361 4832 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516372 4832 flags.go:64] FLAG: --registry-burst="10" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516376 4832 flags.go:64] FLAG: --registry-qps="5" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516380 4832 flags.go:64] FLAG: --reserved-cpus="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516399 4832 flags.go:64] FLAG: --reserved-memory="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516404 4832 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516409 4832 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516413 4832 flags.go:64] FLAG: --rotate-certificates="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516416 4832 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516420 4832 flags.go:64] FLAG: --runonce="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516424 4832 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516428 4832 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516433 4832 flags.go:64] FLAG: --seccomp-default="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516436 4832 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516441 4832 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516445 4832 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516448 4832 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516453 4832 flags.go:64] FLAG: --storage-driver-password="root" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516457 4832 flags.go:64] FLAG: --storage-driver-secure="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 
06:09:04.516460 4832 flags.go:64] FLAG: --storage-driver-table="stats" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516464 4832 flags.go:64] FLAG: --storage-driver-user="root" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516468 4832 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516473 4832 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516477 4832 flags.go:64] FLAG: --system-cgroups="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516491 4832 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516498 4832 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516502 4832 flags.go:64] FLAG: --tls-cert-file="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516506 4832 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516514 4832 flags.go:64] FLAG: --tls-min-version="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516519 4832 flags.go:64] FLAG: --tls-private-key-file="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516522 4832 flags.go:64] FLAG: --topology-manager-policy="none" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516526 4832 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516531 4832 flags.go:64] FLAG: --topology-manager-scope="container" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516535 4832 flags.go:64] FLAG: --v="2" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516541 4832 flags.go:64] FLAG: --version="false" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516548 4832 flags.go:64] FLAG: --vmodule="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516555 4832 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.516560 4832 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516851 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516862 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516866 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516870 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516874 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516888 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
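[annotation] The I1204 ... flags.go:64] FLAG: lines that just ended are the kubelet echoing every flag's effective value at startup (this node runs with --v="2", which enables that dump). The lines parse cleanly into a dictionary, which is handy for diffing the command lines of two nodes. A minimal sketch, with sample lines abridged from the dump above:

import re

text = """\
I1204 06:09:04.515785 4832 flags.go:64] FLAG: --address="0.0.0.0"
I1204 06:09:04.515844 4832 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
I1204 06:09:04.516279 4832 flags.go:64] FLAG: --node-ip="192.168.126.11"
"""

# Each dump line has the shape: FLAG: --name="value"
FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*)"')
flags = dict(FLAG.findall(text))

print(flags["--node-ip"])   # 192.168.126.11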
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516897 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516901 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516905 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516909 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516914 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516918 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516923 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516927 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516932 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516937 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516941 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516945 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516955 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516959 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516963 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.516966 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517049 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517053 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517057 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517060 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517064 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517068 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517072 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517076 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517600 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 
06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517648 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517656 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517664 4832 feature_gate.go:330] unrecognized feature gate: Example Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517671 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517679 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517685 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517692 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517698 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517703 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517709 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517717 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517738 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517745 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517751 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517759 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517765 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517772 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517788 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517793 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517800 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517805 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517811 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.517817 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518006 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518026 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518032 4832 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518038 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518043 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518048 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518053 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518059 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518065 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518072 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518078 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518083 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518089 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518094 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518106 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518113 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.518119 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.518131 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.534025 4832 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.534078 4832 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534234 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534248 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534256 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534265 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534274 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534282 
4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534290 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534298 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534306 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534314 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534321 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534330 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534338 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534347 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534355 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534363 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534374 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534387 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534423 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534432 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534441 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534450 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534459 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534466 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534474 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534483 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534492 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534503 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534511 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534519 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534527 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534535 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534543 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534550 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534561 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
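[annotation] Every kubenswrapper message above uses the standard klog header: a severity letter (I=info, W=warning, E=error, F=fatal), the month and day packed as MMDD, a wall-clock time with microseconds, the process/thread id, then source file:line and the message. A small parser that makes those fields explicit, shown against one warning copied from this log:

import re
from typing import NamedTuple

class KlogLine(NamedTuple):
    severity: str
    month: int
    day: int
    time: str
    pid: int
    source: str
    message: str

# Header shape: <sev><MMDD> <hh:mm:ss.uuuuuu> <pid> <file:line>] <msg>
KLOG = re.compile(
    r"([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)"
)

line = ("W1204 06:09:04.534374 4832 feature_gate.go:353] Setting GA feature gate "
        "DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.")
m = KLOG.match(line)
rec = KlogLine(m.group(1), int(m.group(2)), int(m.group(3)),
               m.group(4), int(m.group(5)), m.group(6), m.group(7))
print(rec.severity, rec.source)   # W feature_gate.go:353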
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534571 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534579 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534620 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534628 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534637 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534647 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534657 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534667 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534675 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534683 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534691 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534699 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534708 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534716 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534724 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534731 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534739 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534746 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534755 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534762 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534770 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534779 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534786 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534794 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534802 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534809 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 
06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534820 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534830 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534838 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534846 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534853 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534861 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534869 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534877 4832 feature_gate.go:330] unrecognized feature gate: Example Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534885 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.534892 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.534906 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535146 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535157 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535166 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535174 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535181 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535189 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535197 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535205 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535213 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535221 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535229 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 06:09:04 crc 
kubenswrapper[4832]: W1204 06:09:04.535238 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535246 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535256 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535266 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535275 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535286 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535296 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535305 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535314 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535322 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535330 4832 feature_gate.go:330] unrecognized feature gate: Example Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535340 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535348 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535358 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535368 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535377 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535411 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535419 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535427 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535436 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535444 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535452 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535460 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535468 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535476 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535485 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535494 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535505 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535515 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535524 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535532 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535540 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535548 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535555 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535566 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
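[annotation] After each parse pass the kubelet logs the effective gate set as a Go map literal, the "feature gates: {map[...]}" lines seen above and again just below. That literal converts to a Python dict with a couple of string operations. A sketch, with the map abridged from this log:

# Abridged from the "feature gates:" entries in this log.
dump = ("{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true "
        "KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

inner = dump.strip("{}")          # drop the outer braces
inner = inner[len("map["):-1]     # drop the "map[" prefix and trailing "]"

gates = {}
for pair in inner.split():        # entries are space-separated name:bool pairs
    name, _, value = pair.partition(":")
    gates[name] = (value == "true")

print(sorted(k for k, v in gates.items() if v))
# ['CloudDualStackNodeIPs', 'DisableKubeletCloudCredentialProviders', 'KMSv1', 'ValidatingAdmissionPolicy']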
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535575 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535584 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535594 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535602 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535610 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535618 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535626 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535634 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535642 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535649 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535657 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535665 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535674 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535681 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535690 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535697 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535705 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535713 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535721 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535729 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535736 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535744 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535752 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535760 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.535768 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.535779 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.536328 4832 server.go:940] "Client rotation is on, will bootstrap in background" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.540739 4832 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.540874 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.541727 4832 server.go:997] "Starting client certificate rotation" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.541772 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.542084 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 12:27:36.254893217 +0000 UTC Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.542342 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 54h18m31.712559741s for next certificate rotation Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.554493 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.557565 4832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.571554 4832 log.go:25] "Validated CRI v1 runtime API" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.595862 4832 log.go:25] "Validated CRI v1 image API" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.599039 4832 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.603327 4832 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-04-06-04-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.603437 4832 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.637498 4832 manager.go:217] Machine: {Timestamp:2025-12-04 06:09:04.635092932 +0000 UTC m=+0.247910688 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 
MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13 BootID:897682a6-bffb-4874-9d5a-2be09a040e0d Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:02:9a:f6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:02:9a:f6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6a:bb:31 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9e:e0:69 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:32:55:e8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:97:1a:4a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:5d:65:00:96:f3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:4a:fd:20:ae:2c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 
Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.638087 4832 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.638682 4832 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.639636 4832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.640007 4832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.640079 4832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.640488 4832 topology_manager.go:138] "Creating topology manager with none policy"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.640517 4832 container_manager_linux.go:303] "Creating device plugin manager"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.640879 4832 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.640958 4832 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.641492 4832 state_mem.go:36] "Initialized new in-memory state store"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.641675 4832 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.642941 4832 kubelet.go:418] "Attempting to sync node with API server"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.642993 4832 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.643041 4832 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.643072 4832 kubelet.go:324] "Adding apiserver pod source"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.643095 4832 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.645532 4832 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.645999 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.646177 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.646260 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.646616 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.646607 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.647468 4832 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648145 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648184 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648196 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648206 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648224 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648236 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648247 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648264 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648277 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648289 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648305 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648334 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.648581 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.649144 4832 server.go:1280] "Started kubelet"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.651097 4832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.651414 4832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.651566 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Dec 04 06:09:04 crc systemd[1]: Started Kubernetes Kubelet.
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.652067 4832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.653550 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187dee35fd479ff6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 06:09:04.649109494 +0000 UTC m=+0.261927220,LastTimestamp:2025-12-04 06:09:04.649109494 +0000 UTC m=+0.261927220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.655167 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.655214 4832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.655527 4832 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.655554 4832 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.655571 4832 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.655671 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:21:15.294839986 +0000 UTC
Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.655763 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.656165 4832 server.go:460] "Adding debug handlers to kubelet server"
Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.656621 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.656798 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.656713 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms"
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.659791 4832 factory.go:55] Registering systemd factory
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.659834 4832 factory.go:221] Registration of the systemd container factory successfully
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.660615 4832 factory.go:153] Registering CRI-O factory
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.660651 4832 factory.go:221] Registration of the crio container factory successfully
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.660751 4832 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.660833 4832 factory.go:103] Registering Raw factory
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.660865 4832 manager.go:1196] Started watching for new ooms in manager
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.661689 4832 manager.go:319] Starting recovery of all containers
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671682 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671743 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671756 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671768 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671780 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671792 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660"
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671803 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671815 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671827 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671839 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671850 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671861 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671872 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671911 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671925 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671937 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671949 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671961 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671972 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671983 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.671996 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672007 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672018 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672029 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672042 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672076 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672095 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672130 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672142 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672885 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672915 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672927 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672942 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672950 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672959 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672967 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672978 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672987 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.672997 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673006 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673017 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673026 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673035 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673043 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673052 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673060 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673069 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673077 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673086 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673095 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673104 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673113 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673126 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673136 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673146 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673156 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673166 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673175 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673184 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673192 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673201 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673209 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673217 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673225 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673234 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673241 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673251 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673259 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673267 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673277 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673298 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673307 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673315 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673323 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673331 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673339 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673347 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673369 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673381 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673404 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673416 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673424 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673432 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673441 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673450 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673458 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673468 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673476 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673559 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673568 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673580 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673592 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673606 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673617 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673627 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673671 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673686 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673700 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673713 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673727 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673743 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673756 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673768 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673780 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673794 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.673809 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674366 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674405 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674420 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674432 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674445 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674456 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674468 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.674478 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676717 4832 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676815 4832 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676843 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676859 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676942 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676964 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.676986 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677005 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677457 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677492 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677509 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677522 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677536 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677590 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677606 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677619 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677632 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677645 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677661 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677675 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677688 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677700 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677712 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677726 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677748 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677761 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677774 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677787 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677802 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677814 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.677828 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678207 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678241 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678279 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678303 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678325 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678360 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678378 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678431 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678451 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678471 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678498 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678522 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678546 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678562 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678576 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678596 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.678610 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679052 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679078 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679095 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679117 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679132 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679154 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679169 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679185 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679208 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679223 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679246 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679261 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679276 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679296 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679312 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679331 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679346 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679359 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679380 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679413 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679434 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679449 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679485 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679504 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679519 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679538 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679555 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679570 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679588 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679601 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679620 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679639 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679653 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679671 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679685 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679700 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679717 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679733 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679752 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679767 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679784 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679804 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679818 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679835 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679849 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679863 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679877 4832 reconstruct.go:97] "Volume reconstruction finished" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.679887 4832 reconciler.go:26] "Reconciler: start to sync state" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.684799 4832 manager.go:324] Recovery completed Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.697449 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.699119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.699235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.699304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.701266 4832 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.701283 4832 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.701323 4832 state_mem.go:36] "Initialized new in-memory state store" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.707803 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.709195 4832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.709256 4832 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.709296 4832 kubelet.go:2335] "Starting kubelet main sync loop" Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.709369 4832 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 04 06:09:04 crc kubenswrapper[4832]: W1204 06:09:04.710331 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.710399 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.712094 4832 policy_none.go:49] "None policy: Start" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.713302 4832 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.713341 4832 state_mem.go:35] "Initializing new in-memory state store" Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.755958 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.770375 4832 manager.go:334] "Starting Device Plugin manager" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.770454 4832 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.770466 4832 server.go:79] "Starting device plugin registration server" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.770830 4832 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.770846 4832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.771005 4832 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.771131 4832 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.771138 4832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.777713 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.810312 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 04 06:09:04 crc kubenswrapper[4832]: 
I1204 06:09:04.810450 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.811380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.811436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.811447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.811590 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.811910 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.811979 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812427 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812640 4832 util.go:30] "No sandbox for pod can be found. 
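
The "SyncLoop ADD" with source="file" is the key event in this stretch: the API server at api-int.crc.testing:6443 is still refusing connections, but these five control-plane pods come from on-disk manifests, so the kubelet can start them without it. The repeated "No sandbox for pod can be found. Need to start a new one" entries simply mean no previous pod sandbox survived to be reused. A stdlib-only sketch of a file pod source; the manifest directory is the conventional kubelet staticPodPath, assumed here rather than taken from this log:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Conventional static pod manifest directory (an assumption for this
        // sketch; the real path comes from the kubelet configuration).
        manifests, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
        if err != nil {
            panic(err)
        }
        for _, m := range manifests {
            // The kubelet decodes each file into a Pod object and feeds it to
            // the sync loop with source="file", as in the SyncLoop ADD above.
            fmt.Println("static pod manifest:", m)
        }
    }
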
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812664 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.812999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813085 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813170 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813199 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.813987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814423 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814471 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.814496 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815325 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815351 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.815384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.816038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.816057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.816067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.857676 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.872426 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.873419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.873459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.873471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.873495 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 06:09:04 crc kubenswrapper[4832]: E1204 06:09:04.873932 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.882149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.882176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc 
kubenswrapper[4832]: I1204 06:09:04.882271 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.882968 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883087 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883258 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883281 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.883304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984416 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984469 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984505 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984580 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984558 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984667 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984699 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984732 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984848 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:04 crc kubenswrapper[4832]: I1204 06:09:04.984935 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.075138 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.076866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.076899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.076908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.076928 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 06:09:05 crc kubenswrapper[4832]: E1204 06:09:05.077481 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.157833 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.186487 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7b65b8f704a14db5de54062307b4d8a13084c69e15198c475122dc6bea78a5b5 WatchSource:0}: Error finding container 7b65b8f704a14db5de54062307b4d8a13084c69e15198c475122dc6bea78a5b5: Status 404 returned error can't find the container with id 7b65b8f704a14db5de54062307b4d8a13084c69e15198c475122dc6bea78a5b5 Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.188318 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.195731 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.203787 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b5878b8f3f95eb5ffca310101a82bad054a2a799585117acfa2f73650494b267 WatchSource:0}: Error finding container b5878b8f3f95eb5ffca310101a82bad054a2a799585117acfa2f73650494b267: Status 404 returned error can't find the container with id b5878b8f3f95eb5ffca310101a82bad054a2a799585117acfa2f73650494b267 Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.215493 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a176e53ba7f74360768afc916b7ec1b009ff4f4e6e4cf9316b834cb12e342a00 WatchSource:0}: Error finding container a176e53ba7f74360768afc916b7ec1b009ff4f4e6e4cf9316b834cb12e342a00: Status 404 returned error can't find the container with id a176e53ba7f74360768afc916b7ec1b009ff4f4e6e4cf9316b834cb12e342a00 Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.217351 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.224761 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.233276 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-756716f230f69a9e2d15a888b1d043f65658dbdf1b59de4a5443e1a87824efa7 WatchSource:0}: Error finding container 756716f230f69a9e2d15a888b1d043f65658dbdf1b59de4a5443e1a87824efa7: Status 404 returned error can't find the container with id 756716f230f69a9e2d15a888b1d043f65658dbdf1b59de4a5443e1a87824efa7 Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.240879 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d6ae7680e2534b2f71144519d4c781773b2e3038fa710c95267c08b1a7fdac82 WatchSource:0}: Error finding container d6ae7680e2534b2f71144519d4c781773b2e3038fa710c95267c08b1a7fdac82: Status 404 returned error can't find the container with id d6ae7680e2534b2f71144519d4c781773b2e3038fa710c95267c08b1a7fdac82 Dec 04 06:09:05 crc kubenswrapper[4832]: E1204 06:09:05.258772 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.478032 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.479980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.480031 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.480040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.480065 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 06:09:05 crc kubenswrapper[4832]: E1204 06:09:05.480648 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.654071 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.656254 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:24:58.766246039 +0000 UTC Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.718209 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891" exitCode=0 Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.718339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.718580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7b65b8f704a14db5de54062307b4d8a13084c69e15198c475122dc6bea78a5b5"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.718737 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.720420 4832 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4" exitCode=0 Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.720520 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.720590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d6ae7680e2534b2f71144519d4c781773b2e3038fa710c95267c08b1a7fdac82"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.720607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.720629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc 
kubenswrapper[4832]: I1204 06:09:05.720658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.720766 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.721846 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.721881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"756716f230f69a9e2d15a888b1d043f65658dbdf1b59de4a5443e1a87824efa7"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.722036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.722102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.722128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.723202 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3" exitCode=0 Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.723260 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.723281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a176e53ba7f74360768afc916b7ec1b009ff4f4e6e4cf9316b834cb12e342a00"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.723456 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.724211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.724234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.724248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.725651 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.725677 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="62ae52b037a2ad0268475de618763edcd93931b07745e018702784fe0fafd474" exitCode=0 Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.725706 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"62ae52b037a2ad0268475de618763edcd93931b07745e018702784fe0fafd474"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.725725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5878b8f3f95eb5ffca310101a82bad054a2a799585117acfa2f73650494b267"} Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.725842 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.726357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.726380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.726429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.726650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.726675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:05 crc kubenswrapper[4832]: I1204 06:09:05.726687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.827796 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 04 06:09:05 crc kubenswrapper[4832]: E1204 06:09:05.827882 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Dec 04 06:09:05 crc kubenswrapper[4832]: W1204 06:09:05.966211 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 04 06:09:05 crc kubenswrapper[4832]: E1204 06:09:05.966291 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Dec 04 06:09:05 crc kubenswrapper[4832]: E1204 06:09:05.983235 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187dee35fd479ff6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 06:09:04.649109494 +0000 UTC m=+0.261927220,LastTimestamp:2025-12-04 06:09:04.649109494 +0000 UTC m=+0.261927220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 06:09:06 crc kubenswrapper[4832]: W1204 06:09:06.047648 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 04 06:09:06 crc kubenswrapper[4832]: E1204 06:09:06.047726 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Dec 04 06:09:06 crc kubenswrapper[4832]: E1204 06:09:06.060786 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Dec 04 06:09:06 crc kubenswrapper[4832]: W1204 06:09:06.149883 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Dec 04 06:09:06 crc kubenswrapper[4832]: E1204 06:09:06.150042 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.281203 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.283665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.283716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.283729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.283757 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.656672 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:38:38.168829654 +0000 UTC Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.732377 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.732452 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.732467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.732482 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.732496 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.732692 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.733691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.733722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.733732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.734735 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cc6415dd24759d5081f5e7691f20d11050bbd12a03a72211b492e46616d667e7" exitCode=0 Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.734796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cc6415dd24759d5081f5e7691f20d11050bbd12a03a72211b492e46616d667e7"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.734945 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.735977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.736046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.736063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.736806 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.736886 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.737583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.737632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.737645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.740329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.740419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.740436 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.740595 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.742015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.742041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.742055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.743180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.743216 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.743231 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119"} Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.743253 
4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.743945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.744001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.744023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:06 crc kubenswrapper[4832]: I1204 06:09:06.895831 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.500832 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.657575 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:20:37.71680236 +0000 UTC Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.657647 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 931h11m30.059160966s for next certificate rotation Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.753880 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e087d9a87b876bf49b165727de887f460ca1303721e001e85741827ac7e4ef6a" exitCode=0 Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.753997 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.754031 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.754082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e087d9a87b876bf49b165727de887f460ca1303721e001e85741827ac7e4ef6a"} Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.754132 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.754314 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.754955 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.755803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.755846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.755859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.755819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.755952 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.755985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.756786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.756875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.756901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.757584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.757606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:07 crc kubenswrapper[4832]: I1204 06:09:07.757620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.759383 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5f3d3c8cba89b44e4bd3a42b50b222ea132b7a56625c885464868646d16da54a"} Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.759455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c34cd525c37dfe67ce80d6dc6cc72b5f354b759c97850f01dbd4e762697430c5"} Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.759467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"32cc97f8e7f2e25a5896dcb51ffa9a10cc1fe3962cb8faa8e6411f9faf6e6bab"} Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.759476 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d48082a0b9ff1e65ca123d82ad2fd45d12e80e41640b2a68fc645b9a28598bc0"} Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.759487 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"adb00360ee4c26c3fd3b34d18e1fb4441581b35b4ad5f1215e08ca545855f39b"} Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.759608 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.760450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.760480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.760489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.839913 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.840078 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.841110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.841147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:08 crc kubenswrapper[4832]: I1204 06:09:08.841165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.648301 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.648488 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.648531 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.649804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.649843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.649855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.885671 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.885859 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.887044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.887095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:09 crc kubenswrapper[4832]: I1204 06:09:09.887106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:10 crc kubenswrapper[4832]: I1204 06:09:10.176184 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:10 crc kubenswrapper[4832]: I1204 06:09:10.176372 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:10 crc kubenswrapper[4832]: I1204 06:09:10.178129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:10 crc kubenswrapper[4832]: I1204 06:09:10.178227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:10 crc kubenswrapper[4832]: I1204 06:09:10.178259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:11 crc kubenswrapper[4832]: 
I1204 06:09:11.137730 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 04 06:09:11 crc kubenswrapper[4832]: I1204 06:09:11.138086 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:11 crc kubenswrapper[4832]: I1204 06:09:11.139298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:11 crc kubenswrapper[4832]: I1204 06:09:11.139334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:11 crc kubenswrapper[4832]: I1204 06:09:11.139345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:14 crc kubenswrapper[4832]: I1204 06:09:14.750853 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:14 crc kubenswrapper[4832]: I1204 06:09:14.751147 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:14 crc kubenswrapper[4832]: I1204 06:09:14.752891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:14 crc kubenswrapper[4832]: I1204 06:09:14.752951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:14 crc kubenswrapper[4832]: I1204 06:09:14.752965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:14 crc kubenswrapper[4832]: E1204 06:09:14.777886 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.046995 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.047302 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.048852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.048923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.048940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.052268 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.746360 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.782358 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.783753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:15 crc 
kubenswrapper[4832]: I1204 06:09:15.783812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.783836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:15 crc kubenswrapper[4832]: I1204 06:09:15.791960 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:09:16 crc kubenswrapper[4832]: E1204 06:09:16.285028 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 04 06:09:16 crc kubenswrapper[4832]: I1204 06:09:16.654975 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 04 06:09:16 crc kubenswrapper[4832]: I1204 06:09:16.789290 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:16 crc kubenswrapper[4832]: I1204 06:09:16.790283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:16 crc kubenswrapper[4832]: I1204 06:09:16.790324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:16 crc kubenswrapper[4832]: I1204 06:09:16.790337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.181424 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.181521 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.187707 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.187776 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.506698 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]log ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]etcd ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/generic-apiserver-start-informers ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/priority-and-fairness-filter ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-apiextensions-informers ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-apiextensions-controllers ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/crd-informer-synced ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-system-namespaces-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 04 06:09:17 crc kubenswrapper[4832]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 04 06:09:17 crc kubenswrapper[4832]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/bootstrap-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/start-kube-aggregator-informers ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-registration-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-discovery-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]autoregister-completion ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-openapi-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 04 06:09:17 crc kubenswrapper[4832]: livez check failed Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.506754 
4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.791624 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.792712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.792766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.792785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.886262 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.887471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.887516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.887529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:17 crc kubenswrapper[4832]: I1204 06:09:17.887556 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 06:09:18 crc kubenswrapper[4832]: I1204 06:09:18.746445 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 04 06:09:18 crc kubenswrapper[4832]: I1204 06:09:18.746528 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 04 06:09:19 crc kubenswrapper[4832]: I1204 06:09:19.910681 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 04 06:09:19 crc kubenswrapper[4832]: I1204 06:09:19.910872 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:19 crc kubenswrapper[4832]: I1204 06:09:19.911862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:19 crc kubenswrapper[4832]: I1204 06:09:19.911893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:19 crc kubenswrapper[4832]: I1204 06:09:19.911915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:19 crc kubenswrapper[4832]: I1204 06:09:19.921434 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 04 06:09:20 crc kubenswrapper[4832]: I1204 06:09:20.799770 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:20 crc kubenswrapper[4832]: I1204 06:09:20.800761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:20 crc kubenswrapper[4832]: I1204 06:09:20.800827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:20 crc kubenswrapper[4832]: I1204 06:09:20.800842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.179343 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181013 4832 trace.go:236] Trace[438549965]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 06:09:08.737) (total time: 13443ms): Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[438549965]: ---"Objects listed" error: 13443ms (06:09:22.180) Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[438549965]: [13.44317881s] [13.44317881s] END Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181037 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181441 4832 trace.go:236] Trace[667777317]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 06:09:09.017) (total time: 13163ms): Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[667777317]: ---"Objects listed" error: 13163ms (06:09:22.181) Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[667777317]: [13.163398472s] [13.163398472s] END Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181456 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181633 4832 trace.go:236] Trace[1316650178]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 06:09:09.227) (total time: 12954ms): Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[1316650178]: ---"Objects listed" error: 12954ms (06:09:22.181) Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[1316650178]: [12.95452692s] [12.95452692s] END Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181650 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181941 4832 trace.go:236] Trace[1081134441]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 06:09:08.448) (total time: 13733ms): Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[1081134441]: ---"Objects listed" error: 13733ms (06:09:22.181) Dec 04 06:09:22 crc kubenswrapper[4832]: Trace[1081134441]: [13.733874082s] [13.733874082s] END Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.181955 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.183591 4832 reconstruct.go:205] "DevicePaths of reconstructed 
volumes updated"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.420779 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53022->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.420859 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53022->192.168.126.11:17697: read: connection reset by peer"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.506850 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.507479 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.507541 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.511286 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.654838 4832 apiserver.go:52] "Watching apiserver"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.659275 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.659585 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.659905 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.659992 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.660005 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.660043 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.660090 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.660179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.660250 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.660487 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.660716 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.661484 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.661662 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.661709 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.661746 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.661763 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.661936 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.662074 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.662139 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.662566 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.686368 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.702140 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.721856 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.733124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.741946 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.754927 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.756329 4832 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.765680 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.775638 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787084 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787132 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787178 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787216 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787239 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787372 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787435 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787491 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787568 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787642 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787662 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787684 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787769 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787828 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787848 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787867 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787895 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787910 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787925 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.787984 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788000 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788015 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788031 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788149 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788167 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788214 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788231 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788263 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788294 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788332 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788367 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788429 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788461 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788493 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788510 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788526 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788541 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788671 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788687 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788767 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788839 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788854 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788869 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788885 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788920 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788955 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788987 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789037 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789051 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789067 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789085 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789103 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789135 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789152 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789167 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788320 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788472 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788585 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788826 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f").
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789895 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788663 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789057 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788671 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789122 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.788354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.789183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790084 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790102 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790136 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790153 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790170 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790211 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790264 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790281 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790319 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790337 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790423 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790491 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790507 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790526 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.791853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792286 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792470 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792546 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792709 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792792 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793064 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793408 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793438 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793485 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793532 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793603 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793632 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 04 06:09:22 crc kubenswrapper[4832]: 
I1204 06:09:22.793652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793681 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793709 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793785 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793878 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793907 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793951 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793976 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794000 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794021 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790208 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790564 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.790919 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.791138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.791469 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.791738 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.795898 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.791848 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.791900 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792135 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792172 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792379 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792407 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.792984 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793008 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793296 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793522 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793665 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793762 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.793931 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794300 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794344 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794437 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794680 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794662 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794933 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.794049 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.795292 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.795437 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.795720 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796213 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796335 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.795868 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796161 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796307 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796522 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796562 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796528 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796660 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796679 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.796893 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.797544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.797822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.797928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798121 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798130 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798577 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798873 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798945 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.799085 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.799436 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802049 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802115 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802141 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802227 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802553 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.802957 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.803271 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.803537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.803535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.804014 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.804176 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:09:23.304143802 +0000 UTC m=+18.916961508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.804173 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.804356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.804423 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.804488 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805203 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805255 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805324 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805351 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805707 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.805959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798875 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806122 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806134 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806357 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806662 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806797 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806875 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.806994 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807362 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807425 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807572 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807630 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807669 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807723 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807808 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807893 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807908 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807955 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.807962 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808044 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808081 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808108 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808134 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809107 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809148 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809185 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809223 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809242 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809360 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809379 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809383 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809545 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809585 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809619 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809651 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809711 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809745 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809775 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809810 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810003 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 
04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810463 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810672 4832 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810692 4832 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc 
kubenswrapper[4832]: I1204 06:09:22.810711 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810725 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810738 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810752 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810764 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810777 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810791 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810811 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810827 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810840 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810853 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810866 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810878 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810891 4832 reconciler_common.go:293] "Volume 
detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810905 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810923 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810936 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810948 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810962 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810974 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810991 4832 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811006 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811021 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811033 4832 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811046 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811109 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811128 4832 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811144 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811158 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811171 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811187 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811201 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811214 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811227 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811240 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811253 4832 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811266 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811279 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811292 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811304 4832 reconciler_common.go:293] "Volume 
detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811316 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811327 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811341 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811354 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811366 4832 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811377 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811411 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811423 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811436 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811451 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811465 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811479 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811494 4832 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811508 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811521 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811534 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811546 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811558 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811575 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811587 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811604 4832 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811618 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811630 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811642 4832 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811654 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811667 4832 reconciler_common.go:293] "Volume detached for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811679 4832 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811691 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811705 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811718 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811731 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811743 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811755 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811767 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811778 4832 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811791 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811803 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811814 4832 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811827 4832 reconciler_common.go:293] "Volume detached for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811840 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811855 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811868 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811885 4832 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811897 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811911 4832 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811924 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811937 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811950 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811963 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811976 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811988 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812002 4832 reconciler_common.go:293] 
"Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812016 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812027 4832 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812042 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812055 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812066 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812079 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812090 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812102 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812114 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812126 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812138 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812150 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812162 4832 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812175 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812187 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812207 4832 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812220 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812233 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812244 4832 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812257 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812269 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812283 4832 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812296 4832 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812311 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812323 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812338 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812352 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808231 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.813361 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808329 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.813364 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.808504 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.813442 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e" exitCode=255 Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810078 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810325 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810662 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.810809 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811110 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811325 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811488 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.811820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.813645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812430 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812440 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.812736 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.798965 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.813063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.813283 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.809019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.814023 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.814748 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.814824 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:23.314800745 +0000 UTC m=+18.927618551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.815535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.816010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.816055 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.816233 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.816323 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:23.31630318 +0000 UTC m=+18.929120886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.816570 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e"} Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.816707 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.817165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.817518 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.817621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.817753 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.818247 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.818334 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.818371 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.818605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.818694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.818724 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.815375 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.819082 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.819256 4832 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.819542 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.819614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.819836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.822576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.823764 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.823970 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.824804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.824813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.827503 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.827535 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.827552 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.827623 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:23.327603178 +0000 UTC m=+18.940420884 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.828675 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.828693 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.828703 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.828740 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:23.328730935 +0000 UTC m=+18.941548641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.828917 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.829484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.829520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.829592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.829835 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.829867 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.829940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.830719 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.832121 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.832798 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.832856 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.833986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.835327 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.835679 4832 scope.go:117] "RemoveContainer" containerID="64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.835934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.841786 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842013 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842218 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842238 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842264 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842282 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842426 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.842938 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.843114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.843205 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.843216 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.850305 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.852835 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.853601 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.867799 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.873905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.874900 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.881583 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.885164 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04
T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: E1204 06:09:22.891693 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.897477 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.913671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.913749 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.913796 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.913871 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.913917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.913969 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914011 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914278 4832 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914293 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914305 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914327 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914339 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914350 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914432 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914442 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914452 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914462 4832 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914472 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 04 
06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914481 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914491 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914501 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914511 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914520 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914533 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914543 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914554 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914564 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914575 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914586 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914597 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914607 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: 
I1204 06:09:22.914617 4832 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914628 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914638 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914649 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914660 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914670 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914680 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914690 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914701 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914711 4832 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914721 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914731 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914742 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 
06:09:22.914753 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914762 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914773 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914782 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914794 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914806 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914816 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914826 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914836 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914847 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914858 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914869 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914879 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 
06:09:22.914889 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914899 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914909 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914920 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914930 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914939 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914949 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914959 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914969 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914978 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914988 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.914998 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915009 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915019 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915030 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915064 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915075 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915085 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915095 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.915105 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.917586 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.936267 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.971888 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.978753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 06:09:22 crc kubenswrapper[4832]: I1204 06:09:22.983324 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.317876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.317944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.317966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.318061 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.318072 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:09:24.318043745 +0000 UTC m=+19.930861451 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.318078 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.318136 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:24.318108557 +0000 UTC m=+19.930926263 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.318151 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:24.318145138 +0000 UTC m=+19.930962834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.418333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.418381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418572 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418575 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418632 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418594 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418653 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418658 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418715 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:24.418696888 +0000 UTC m=+20.031514594 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:23 crc kubenswrapper[4832]: E1204 06:09:23.418761 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:24.418737909 +0000 UTC m=+20.031555615 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.816831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9c2d9dd1d12b109dcaa2bf05e33a28cbb187e812d11425c1bd9fe8e95fa8c3bf"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.818202 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.818235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dd6ee1dc351e52bb5fa8196410810b4c025f8364a67a9197fde4b7fafe9b1bff"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.820483 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.822633 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.822806 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.824701 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.824741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.824751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fbc856c8523480ce0c6fb8b272fb25248832cc4793ef05b740b07ae99e56812a"} Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.840543 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:23Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.853692 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:23Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.873356 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:23Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.948579 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:23Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:23 crc kubenswrapper[4832]: I1204 06:09:23.997604 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:23Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.012601 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.029607 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04
T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.053167 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.066837 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.087911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.104468 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.123256 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.125501 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jl6q4"] Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.125888 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-97mnv"] Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.126013 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.126078 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.127960 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.127988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.128308 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.128469 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.128780 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.128858 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.129062 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.129372 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.148430 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.162233 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.176779 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.188170 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.204859 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.217193 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.223762 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4079cbc8-9860-412d-8bb8-37713e677d1c-rootfs\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.223841 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4079cbc8-9860-412d-8bb8-37713e677d1c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.223923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zzrv\" (UniqueName: \"kubernetes.io/projected/1bc4584c-cbf3-472e-ab0e-1ada32291529-kube-api-access-8zzrv\") pod \"node-resolver-97mnv\" (UID: \"1bc4584c-cbf3-472e-ab0e-1ada32291529\") " pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.223950 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4079cbc8-9860-412d-8bb8-37713e677d1c-proxy-tls\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.223968 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1bc4584c-cbf3-472e-ab0e-1ada32291529-hosts-file\") pod \"node-resolver-97mnv\" (UID: \"1bc4584c-cbf3-472e-ab0e-1ada32291529\") " pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.223992 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4hbj\" (UniqueName: \"kubernetes.io/projected/4079cbc8-9860-412d-8bb8-37713e677d1c-kube-api-access-q4hbj\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.229075 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.241925 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 
2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.254585 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.266739 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.278921 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325196 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325291 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4079cbc8-9860-412d-8bb8-37713e677d1c-rootfs\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325332 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4079cbc8-9860-412d-8bb8-37713e677d1c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.325374 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:09:26.325346997 +0000 UTC m=+21.938164703 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325477 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zzrv\" (UniqueName: \"kubernetes.io/projected/1bc4584c-cbf3-472e-ab0e-1ada32291529-kube-api-access-8zzrv\") pod \"node-resolver-97mnv\" (UID: \"1bc4584c-cbf3-472e-ab0e-1ada32291529\") " pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4079cbc8-9860-412d-8bb8-37713e677d1c-proxy-tls\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1bc4584c-cbf3-472e-ab0e-1ada32291529-hosts-file\") pod \"node-resolver-97mnv\" (UID: \"1bc4584c-cbf3-472e-ab0e-1ada32291529\") " pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325537 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.325556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4hbj\" (UniqueName: \"kubernetes.io/projected/4079cbc8-9860-412d-8bb8-37713e677d1c-kube-api-access-q4hbj\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.325848 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.325878 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:26.325871269 +0000 UTC m=+21.938688965 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.326010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4079cbc8-9860-412d-8bb8-37713e677d1c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.326072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4079cbc8-9860-412d-8bb8-37713e677d1c-rootfs\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.326120 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1bc4584c-cbf3-472e-ab0e-1ada32291529-hosts-file\") pod \"node-resolver-97mnv\" (UID: \"1bc4584c-cbf3-472e-ab0e-1ada32291529\") " pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.326182 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.326212 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:26.326205367 +0000 UTC m=+21.939023073 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.330285 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4079cbc8-9860-412d-8bb8-37713e677d1c-proxy-tls\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.346003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zzrv\" (UniqueName: \"kubernetes.io/projected/1bc4584c-cbf3-472e-ab0e-1ada32291529-kube-api-access-8zzrv\") pod \"node-resolver-97mnv\" (UID: \"1bc4584c-cbf3-472e-ab0e-1ada32291529\") " pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.347290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4hbj\" (UniqueName: \"kubernetes.io/projected/4079cbc8-9860-412d-8bb8-37713e677d1c-kube-api-access-q4hbj\") pod \"machine-config-daemon-jl6q4\" (UID: \"4079cbc8-9860-412d-8bb8-37713e677d1c\") " pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.426444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.426510 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426635 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426654 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426667 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426671 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 
06:09:24.426710 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426724 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426731 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:26.426714646 +0000 UTC m=+22.039532352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.426788 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:26.426769917 +0000 UTC m=+22.039587693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.438272 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-97mnv" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.444753 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:09:24 crc kubenswrapper[4832]: W1204 06:09:24.449111 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc4584c_cbf3_472e_ab0e_1ada32291529.slice/crio-a895162b9a05d10bfbf7ff1c777d4ebbf42dc42c1cc1497c335fc88781bfd025 WatchSource:0}: Error finding container a895162b9a05d10bfbf7ff1c777d4ebbf42dc42c1cc1497c335fc88781bfd025: Status 404 returned error can't find the container with id a895162b9a05d10bfbf7ff1c777d4ebbf42dc42c1cc1497c335fc88781bfd025 Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.540059 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jg77n"] Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.540697 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.542019 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9nl9n"] Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.542411 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.546728 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.547136 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.547252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.547253 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.547353 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.553095 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.553284 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.596248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.627991 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628025 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-os-release\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-cni-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-netns\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-kubelet\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628236 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-hostroot\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " 
pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628261 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-os-release\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628278 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-cni-multus\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfstt\" (UniqueName: \"kubernetes.io/projected/289c102f-5bf1-46ae-84a5-37ab6ced4618-kube-api-access-kfstt\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628314 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-k8s-cni-cncf-io\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628331 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-cnibin\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628347 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-socket-dir-parent\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-conf-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628414 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/289c102f-5bf1-46ae-84a5-37ab6ced4618-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/325cffd3-4d6a-4916-8ad9-743cdc486769-cni-binary-copy\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-multus-certs\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-etc-kubernetes\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/289c102f-5bf1-46ae-84a5-37ab6ced4618-cni-binary-copy\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628488 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-system-cni-dir\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628504 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628519 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-system-cni-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-cnibin\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-cni-bin\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-daemon-config\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.628686 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8d2t\" (UniqueName: \"kubernetes.io/projected/325cffd3-4d6a-4916-8ad9-743cdc486769-kube-api-access-v8d2t\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.644669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.657435 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.668740 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.684770 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.699726 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.710344 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.710432 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.710467 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.710450 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.710546 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:24 crc kubenswrapper[4832]: E1204 06:09:24.710621 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.714222 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.714787 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.715508 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.716078 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.716780 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.717802 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.718358 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.718971 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.720168 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.720791 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.721844 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.722380 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.723539 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.724306 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.725009 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.725910 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.726675 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.727670 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.727981 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.728095 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.728705 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729484 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-cni-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729510 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-netns\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-kubelet\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-hostroot\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729570 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-os-release\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfstt\" (UniqueName: \"kubernetes.io/projected/289c102f-5bf1-46ae-84a5-37ab6ced4618-kube-api-access-kfstt\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-cni-multus\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729617 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-k8s-cni-cncf-io\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-cnibin\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729647 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-socket-dir-parent\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-conf-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/325cffd3-4d6a-4916-8ad9-743cdc486769-cni-binary-copy\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-multus-certs\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-etc-kubernetes\") pod 
\"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729720 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730195 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/289c102f-5bf1-46ae-84a5-37ab6ced4618-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-cni-multus\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-conf-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-k8s-cni-cncf-io\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-multus-certs\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-cni-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-socket-dir-parent\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.729726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/289c102f-5bf1-46ae-84a5-37ab6ced4618-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jg77n\" (UID: 
\"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-etc-kubernetes\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-kubelet\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-hostroot\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-run-netns\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/289c102f-5bf1-46ae-84a5-37ab6ced4618-cni-binary-copy\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730532 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-cnibin\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-cnibin\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-cni-bin\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730662 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-daemon-config\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-host-var-lib-cni-bin\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-system-cni-dir\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730753 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-os-release\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730763 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730793 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-system-cni-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8d2t\" (UniqueName: \"kubernetes.io/projected/325cffd3-4d6a-4916-8ad9-743cdc486769-kube-api-access-v8d2t\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.730844 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-system-cni-dir\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-os-release\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/289c102f-5bf1-46ae-84a5-37ab6ced4618-cni-binary-copy\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731039 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/325cffd3-4d6a-4916-8ad9-743cdc486769-cni-binary-copy\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-os-release\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/289c102f-5bf1-46ae-84a5-37ab6ced4618-system-cni-dir\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731049 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/325cffd3-4d6a-4916-8ad9-743cdc486769-cnibin\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.731306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/325cffd3-4d6a-4916-8ad9-743cdc486769-multus-daemon-config\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.732117 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.732619 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.733730 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.734136 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.734871 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.736060 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.736570 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.737578 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.738600 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.739432 4832 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.739527 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.741159 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.742089 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.742637 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.742750 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.744251 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.744955 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.745849 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.747201 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.748749 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.749453 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.750060 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.751212 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.752556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kfstt\" (UniqueName: \"kubernetes.io/projected/289c102f-5bf1-46ae-84a5-37ab6ced4618-kube-api-access-kfstt\") pod \"multus-additional-cni-plugins-jg77n\" (UID: \"289c102f-5bf1-46ae-84a5-37ab6ced4618\") " pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.752762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8d2t\" (UniqueName: \"kubernetes.io/projected/325cffd3-4d6a-4916-8ad9-743cdc486769-kube-api-access-v8d2t\") pod \"multus-9nl9n\" (UID: \"325cffd3-4d6a-4916-8ad9-743cdc486769\") " pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.752848 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.753292 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.754319 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.755009 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.756174 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.756863 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.757360 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.757826 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.758345 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.758936 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.759968 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.760986 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.771072 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.785431 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.796250 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.808220 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.820852 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.829043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-97mnv" event={"ID":"1bc4584c-cbf3-472e-ab0e-1ada32291529","Type":"ContainerStarted","Data":"edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425"} Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.829086 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-97mnv" event={"ID":"1bc4584c-cbf3-472e-ab0e-1ada32291529","Type":"ContainerStarted","Data":"a895162b9a05d10bfbf7ff1c777d4ebbf42dc42c1cc1497c335fc88781bfd025"} Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.831035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" 
event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2"} Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.831098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2"} Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.831110 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"59b2a5d3c15c667aa47de9a2c2dc0386a809b79e2ef78dc5825d03066855ca7b"} Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.842334 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.857587 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.867890 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.872244 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jg77n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.880307 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9nl9n" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.881114 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.893547 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.906219 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.919764 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.931613 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.938010 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zdmhj"]
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.938949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.940447 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.940809 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.942318 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.942349 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.942532 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.942988 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.943061 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.946139 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.962858 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.984049 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:24 crc kubenswrapper[4832]: I1204 06:09:24.996673 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.008789 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.019444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.031209 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-bin\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 
06:09:25.033666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-systemd-units\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033681 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-kubelet\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033695 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-log-socket\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwds7\" (UniqueName: \"kubernetes.io/projected/c442d280-de5c-4240-90b3-af48bbb2f1c5-kube-api-access-cwds7\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033731 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-netns\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-systemd\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033764 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-config\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-script-lib\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033824 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033925 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.033988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-slash\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034019 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-env-overrides\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034113 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-var-lib-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovn-node-metrics-cert\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034211 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-ovn\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-node-log\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-etc-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.034450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-netd\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.044792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.058436 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.076333 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.088277 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.101830 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.114276 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.126025 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-var-lib-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovn-node-metrics-cert\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135452 
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135452 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-ovn\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-node-log\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-netd\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-etc-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-systemd-units\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135559 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-bin\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-kubelet\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135597 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-log-socket\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135619 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwds7\" (UniqueName: \"kubernetes.io/projected/c442d280-de5c-4240-90b3-af48bbb2f1c5-kube-api-access-cwds7\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135893 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-bin\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135895 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-netd\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135929 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-var-lib-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-kubelet\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135972 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-systemd-units\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-etc-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.135997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-log-socket\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-ovn\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-node-log\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-systemd\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-config\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136867 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-script-lib\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-slash\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136936 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.136962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-netns\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-env-overrides\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137132 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-systemd\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137679 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-env-overrides\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137727 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-openvswitch\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.137896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-slash\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.138255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-script-lib\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.138896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-config\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.145668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovn-node-metrics-cert\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.150331 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.151326 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwds7\" (UniqueName: \"kubernetes.io/projected/c442d280-de5c-4240-90b3-af48bbb2f1c5-kube-api-access-cwds7\") pod \"ovnkube-node-zdmhj\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.161013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
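
Annotation: the err="failed to patch status ..." payloads above are ordinary JSON merge-patches that klog has rendered as a quoted string, which is why every quote inside them appears as \\\" in this capture. A sketch of recovering readable JSON from such a fragment, assuming one level of Go string quoting as shown; the fragment below is illustrative and shortened from the real patches:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Shortened, illustrative patch fragment, still wrapped in one level
	// of Go string quoting as klog would emit it.
	raw := `"{\"metadata\":{\"uid\":\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\"},\"status\":{\"phase\":\"Running\"}}"`

	// strconv.Unquote strips the outer quotes and resolves the \" escapes.
	patch, err := strconv.Unquote(raw)
	if err != nil {
		panic(err)
	}

	// json.Indent re-emits the patch in readable, indented form.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(patch), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```
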
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.161013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.170912 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.185696 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.197539 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.450339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj"
Dec 04 06:09:25 crc kubenswrapper[4832]: W1204 06:09:25.464574 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc442d280_de5c_4240_90b3_af48bbb2f1c5.slice/crio-bf45605bf835942db20b8bd280dc8c984e3f4a06274b42404c007fd10d531089 WatchSource:0}: Error finding container bf45605bf835942db20b8bd280dc8c984e3f4a06274b42404c007fd10d531089: Status 404 returned error can't find the container with id bf45605bf835942db20b8bd280dc8c984e3f4a06274b42404c007fd10d531089
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.751678 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.758274 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.761448 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
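
Annotation: apart from the sandbox, cgroup-watch, and probe lines just above, this whole stretch is a single failure mode fanned out across pods. A sketch that tallies the webhook-related status failures per pod from a journal dump on stdin (for example `journalctl -u kubelet.service | go run tally.go`; the unit name is an assumption):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Matches: "Failed to update status for pod" pod="namespace/name"
	re := regexp.MustCompile(`"Failed to update status for pod" pod="([^"]+)"`)

	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow very long journal lines
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}

	// Print a sorted per-pod tally of the failures.
	pods := make([]string, 0, len(counts))
	for p := range counts {
		pods = append(pods, p)
	}
	sort.Strings(pods)
	for _, p := range pods {
		fmt.Printf("%3d  %s\n", counts[p], p)
	}
}
```
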
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.780942 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.793400 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.808104 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.821991 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.838018 4832 generic.go:334] "Generic (PLEG): container finished" podID="289c102f-5bf1-46ae-84a5-37ab6ced4618" containerID="1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b" exitCode=0 Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.838070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerDied","Data":"1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.839160 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerStarted","Data":"1130a2eccb901fc4ba7c375ea9b04b9538400c2c9b9918658a37d034fc50fd09"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.839439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.840738 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" exitCode=0 Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.840773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.840802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"bf45605bf835942db20b8bd280dc8c984e3f4a06274b42404c007fd10d531089"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.842438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerStarted","Data":"145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.842474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerStarted","Data":"5cc7356b81ea2c68df8b763d31536c0f76408d78134269c1ba9e065ecc2f1223"} Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.843749 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.865189 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.875320 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.888280 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.903259 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.920801 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc 
kubenswrapper[4832]: I1204 06:09:25.946619 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nb
db\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.958609 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:25 crc kubenswrapper[4832]: I1204 06:09:25.970905 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:25Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.007756 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.032774 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.053254 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.060282 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dqplg"] Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.060654 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.065849 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.065909 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.066054 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.066704 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.076658 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.093043 4832 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.094908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.094938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.094946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.095037 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.095509 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.102328 4832 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.102626 4832 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.103726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.103759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.103770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 
06:09:26.103788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.103799 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.111316 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.119342 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.122542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.122579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.122589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.122604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.122614 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.127343 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.133213 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.137006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.137156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.137244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.137307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.137362 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.137771 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.150676 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.152463 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.153856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.153898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.153912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.153929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.153940 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.157885 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/546cfc29-fe8f-4952-999c-11f1f024aee2-serviceca\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.158027 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9nl\" (UniqueName: \"kubernetes.io/projected/546cfc29-fe8f-4952-999c-11f1f024aee2-kube-api-access-7d9nl\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.158050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/546cfc29-fe8f-4952-999c-11f1f024aee2-host\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.164925 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc 
kubenswrapper[4832]: E1204 06:09:26.165880 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload elided; identical to the payload in the 06:09:26.150676 retry logged above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.168937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.169059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
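Every retry above fails with the same x509 error: the serving certificate of the node.network-node-identity.openshift.io webhook expired at 2025-08-24T17:21:41Z, months before the clock time in these entries. A minimal sketch for confirming the expiry from the node itself, assuming Python 3 with the third-party cryptography package (42+) installed and that 127.0.0.1:9743 (the endpoint from the webhook URL above) is reachable:

```python
import ssl
from datetime import datetime, timezone
from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the webhook URL in the log

# Fetch the serving certificate without verifying it; a verifying handshake
# would fail exactly the way the kubelet's did, because the cert is expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

print("notAfter:", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41+00:00 per the log
print("expired: ", datetime.now(timezone.utc) > cert.not_valid_after_utc)
```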
Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.169157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.169220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.169292 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.185175 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload elided; identical to the payload in the 06:09:26.150676 retry logged above; the capture ends inside this payload ...]
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.185290 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.188312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.188342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.188352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.188365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.188374 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.189961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.232242 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc 
kubenswrapper[4832]: I1204 06:09:26.258670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/546cfc29-fe8f-4952-999c-11f1f024aee2-serviceca\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.258718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d9nl\" (UniqueName: \"kubernetes.io/projected/546cfc29-fe8f-4952-999c-11f1f024aee2-kube-api-access-7d9nl\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.258743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/546cfc29-fe8f-4952-999c-11f1f024aee2-host\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.258826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/546cfc29-fe8f-4952-999c-11f1f024aee2-host\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.259706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/546cfc29-fe8f-4952-999c-11f1f024aee2-serviceca\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.268825 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.290962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.291014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.291025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.291043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.291053 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.298414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d9nl\" (UniqueName: \"kubernetes.io/projected/546cfc29-fe8f-4952-999c-11f1f024aee2-kube-api-access-7d9nl\") pod \"node-ca-dqplg\" (UID: \"546cfc29-fe8f-4952-999c-11f1f024aee2\") " pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.327194 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.359044 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.359164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.359244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.359290 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:09:30.359267766 +0000 UTC m=+25.972085532 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.359351 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.359402 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.359417 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:30.359404449 +0000 UTC m=+25.972222145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.359435 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:30.35942763 +0000 UTC m=+25.972245466 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.367765 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.380002 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dqplg" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.393622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.393674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.393683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.393697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.393749 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.408736 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.454775 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.460518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.460594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.460797 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.460830 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.460847 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.460908 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:30.460888962 +0000 UTC m=+26.073706688 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.461157 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.461199 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.461213 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.461271 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:30.46125378 +0000 UTC m=+26.074071486 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.494770 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z 
is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.496088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.496118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.496129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.496146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.496157 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.528950 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.567177 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.598294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.598326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.598337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.598352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.598363 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.609872 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.648249 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.690963 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.700481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.700523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.700535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.700551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.700563 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.710356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.710477 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.710356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.710526 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.710614 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:26 crc kubenswrapper[4832]: E1204 06:09:26.710693 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.728551 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.769073 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.803087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.803135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.803146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.803163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.803176 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.847255 4832 generic.go:334] "Generic (PLEG): container finished" podID="289c102f-5bf1-46ae-84a5-37ab6ced4618" containerID="d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe" exitCode=0 Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.847328 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerDied","Data":"d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.853122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.853175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.853195 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.853210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.853225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.853242 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.855536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dqplg" event={"ID":"546cfc29-fe8f-4952-999c-11f1f024aee2","Type":"ContainerStarted","Data":"589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.855580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dqplg" event={"ID":"546cfc29-fe8f-4952-999c-11f1f024aee2","Type":"ContainerStarted","Data":"a692a25ce2c3cfe95c88c76248fad56d187ad711812fdd18521f7769a559862c"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.867737 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.881408 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.900443 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z 
is after 2025-08-24T17:21:41Z" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.905277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.905313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.905322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.905340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.905350 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:26Z","lastTransitionTime":"2025-12-04T06:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.929479 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 
06:09:26 crc kubenswrapper[4832]: I1204 06:09:26.970341 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:26Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.007655 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.008945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.008979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.009021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.009042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.009055 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.050653 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.094787 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.111443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.111502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.111515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.111582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.111596 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.127267 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.169199 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.209861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.213683 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.213746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.213759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.213795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.213812 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.251279 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.290704 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.317252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.317317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.317334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.317360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.317374 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.329721 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.372584 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.412165 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.419985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.420060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.420075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.420103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.420115 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.449547 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.489682 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.522857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.522917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.522929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.522948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.522959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.530128 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.568194 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.611911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.625099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.625140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.625150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.625164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.625175 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.653570 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.689470 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.727917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.727960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.727973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.727991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.728002 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.729261 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.774755 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.814564 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.830811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.830863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.830879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.830920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.830936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.848968 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.859311 4832 generic.go:334] "Generic (PLEG): container finished" podID="289c102f-5bf1-46ae-84a5-37ab6ced4618" containerID="d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9" exitCode=0 Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.859778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerDied","Data":"d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.887891 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.933675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.933737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.933750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.933773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.933789 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:27Z","lastTransitionTime":"2025-12-04T06:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.934813 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:27 crc kubenswrapper[4832]: I1204 06:09:27.972090 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:27Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.007263 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.036369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.036448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.036457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.036474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.036516 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.053928 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.090169 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.128103 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.139263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.139304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.139313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.139330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.139340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.168251 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.207174 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.242188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.242231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.242242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.242259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.242269 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.250427 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.288423 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.328350 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.344731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.344784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.344796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc 
kubenswrapper[4832]: I1204 06:09:28.344811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.344823 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.369853 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.419895 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.453648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.453711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.453721 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.453737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.453753 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.462564 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z 
is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.556256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.556290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.556298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.556312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.556321 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.658240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.658301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.658310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.658325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.658335 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.709824 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.709898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.709935 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:28 crc kubenswrapper[4832]: E1204 06:09:28.710198 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:28 crc kubenswrapper[4832]: E1204 06:09:28.710330 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:28 crc kubenswrapper[4832]: E1204 06:09:28.710450 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.761124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.761160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.761169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.761184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.761192 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.863264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.863311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.863327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.863346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.863361 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.866062 4832 generic.go:334] "Generic (PLEG): container finished" podID="289c102f-5bf1-46ae-84a5-37ab6ced4618" containerID="71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6" exitCode=0 Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.866121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerDied","Data":"71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.870771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.879704 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.897347 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.910252 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.921288 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.933509 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.948173 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.959914 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.965795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.965834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.965843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.965857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.965866 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:28Z","lastTransitionTime":"2025-12-04T06:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.970186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.983469 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:28 crc kubenswrapper[4832]: I1204 06:09:28.998293 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.017992 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z 
is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.031415 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.043726 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.052858 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.067586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.067614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.067623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.067636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.067644 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.169428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.169469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.169479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.169496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.169507 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.272017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.272055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.272067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.272082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.272093 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.374239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.374280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.374292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.374309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.374321 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.476727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.476755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.476763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.476775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.476783 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.578828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.578892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.578914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.579025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.579055 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.680949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.680990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.681006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.681022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.681032 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.783299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.783341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.783352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.783370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.783381 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.877567 4832 generic.go:334] "Generic (PLEG): container finished" podID="289c102f-5bf1-46ae-84a5-37ab6ced4618" containerID="7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e" exitCode=0 Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.877656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerDied","Data":"7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.888311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.888341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.888351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.888363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.888373 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.897911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.910748 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.920790 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.932893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.944690 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.955487 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.968187 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.979008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.991340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.991424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.991438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.991455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.991467 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:29Z","lastTransitionTime":"2025-12-04T06:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:29 crc kubenswrapper[4832]: I1204 06:09:29.991518 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfs
tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:29Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.005786 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.015939 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.029913 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.041925 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.065333 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z 
is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.094936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.094966 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.094975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.094987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.094998 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.197268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.197330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.197340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.197353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.197361 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.302085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.302122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.302131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.302146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.302166 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.397888 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.398101 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:09:38.398076718 +0000 UTC m=+34.010894424 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.398278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.398364 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.398440 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.398606 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:38.3985924 +0000 UTC m=+34.011410106 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.398508 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.398740 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:38.398729843 +0000 UTC m=+34.011547649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.404535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.404574 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.404586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.404656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.404717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.499615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.499666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.499944 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500122 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500139 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500238 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:38.500216446 +0000 UTC m=+34.113034172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500659 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500699 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500715 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.500782 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:38.500764818 +0000 UTC m=+34.113582604 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.507110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.507219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.507278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.507357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.507442 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.609271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.609318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.609333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.609350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.609362 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.709897 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.709949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.709965 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.710036 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.710145 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:30 crc kubenswrapper[4832]: E1204 06:09:30.710236 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.711504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.711543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.711552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.711564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.711573 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.813839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.813876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.813888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.813905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.813919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.893510 4832 generic.go:334] "Generic (PLEG): container finished" podID="289c102f-5bf1-46ae-84a5-37ab6ced4618" containerID="034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785" exitCode=0 Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.893565 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerDied","Data":"034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.897486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.897715 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.910322 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.916086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.916118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.916128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.916144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.916156 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:30Z","lastTransitionTime":"2025-12-04T06:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.921088 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.924271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.950498 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.973039 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.984841 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:30 crc kubenswrapper[4832]: I1204 06:09:30.997602 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-04T06:09:30Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.011824 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.019114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.019147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.019157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.019173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.019182 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.027981 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.039653 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.051924 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.064686 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.078206 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.089901 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.101322 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.113437 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.121311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.121353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.121365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.121381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.121404 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.126141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.134872 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.145973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.156734 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.168697 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.181932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.191421 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.201667 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.211008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.223525 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.223557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.223567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.223580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.223591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.224850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.242139 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be32
77e3f80bab2ccfddb9db0800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.253810 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.264265 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.325752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.325814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.325831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.325855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.325871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.428097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.428350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.428455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.428545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.428651 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.531709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.531798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.531820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.531848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.531868 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.556535 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.635294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.635339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.635348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.635366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.635379 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.737363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.737418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.737427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.737440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.737450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.839015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.839044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.839052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.839066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.839075 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.904386 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" event={"ID":"289c102f-5bf1-46ae-84a5-37ab6ced4618","Type":"ContainerStarted","Data":"118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.904982 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.919670 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.926172 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.941207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.941466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.941581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.941668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.941778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:31Z","lastTransitionTime":"2025-12-04T06:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.942227 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be32
77e3f80bab2ccfddb9db0800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.955723 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.968528 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.978837 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:31 crc kubenswrapper[4832]: I1204 06:09:31.991571 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:31Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.006616 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.017887 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.031811 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.043219 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.044375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.044471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.044488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.044512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.044528 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.063140 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.078008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.088049 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.100552 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.111147 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.121767 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.141724 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be32
77e3f80bab2ccfddb9db0800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.146119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.146146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.146155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.146169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.146179 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.153473 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.167009 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.177039 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.191607 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.204304 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.214435 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.224974 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.234055 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.248432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.248497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.248511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.248528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.248538 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.250093 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.260955 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.271690 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:32Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.350854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.350894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.350905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.350920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.350931 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.453360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.453466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.453486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.453510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.453529 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.555946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.555984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.555993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.556007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.556018 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.658565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.658610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.658621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.658638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.658651 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.710335 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.710347 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:32 crc kubenswrapper[4832]: E1204 06:09:32.710560 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.710614 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:32 crc kubenswrapper[4832]: E1204 06:09:32.710710 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:32 crc kubenswrapper[4832]: E1204 06:09:32.710771 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.761121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.761172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.761184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.761201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.761213 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.863530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.863578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.863590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.863606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.863616 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.966623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.966671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.966686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.966702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:32 crc kubenswrapper[4832]: I1204 06:09:32.966711 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:32Z","lastTransitionTime":"2025-12-04T06:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.068970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.069231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.069344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.069458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.069568 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.171357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.171676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.171832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.171954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.172074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.275285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.275487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.275581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.275673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.275779 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.377566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.377630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.377650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.377674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.377689 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.479935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.479981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.479992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.480009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.480020 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.582317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.582366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.582415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.582471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.582487 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.685491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.685577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.685588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.685612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.685625 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.788803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.788870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.788881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.788902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.788915 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.891663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.891732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.891750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.891774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.891795 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.911982 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/0.log" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.914858 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800" exitCode=1 Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.914898 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800"} Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.916744 4832 scope.go:117] "RemoveContainer" containerID="94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.929438 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:33Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.940816 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:33Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.950729 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-04T06:09:33Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.964659 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:33Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.978085 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:33Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.993781 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:33Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.994925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.994968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.994983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.995002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:33 crc kubenswrapper[4832]: I1204 06:09:33.995016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:33Z","lastTransitionTime":"2025-12-04T06:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.008603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.021905 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.035655 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.048109 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.056910 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.068520 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506
ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.098080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.098132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.098143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.098163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.098175 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.106413 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.130982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:33Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 06:09:33.330099 6119 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 06:09:33.330120 6119 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 06:09:33.330125 6119 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 06:09:33.330136 6119 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 06:09:33.330145 6119 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 06:09:33.330155 6119 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1204 06:09:33.330159 6119 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1204 06:09:33.330192 6119 factory.go:656] Stopping watch factory\\\\nI1204 06:09:33.330202 6119 ovnkube.go:599] Stopped ovnkube\\\\nI1204 06:09:33.330223 6119 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 06:09:33.330229 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 06:09:33.330236 6119 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 06:09:33.330241 6119 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 06:09:33.330246 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 06:09:33.330251 6119 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.200312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.200361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.200373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.200410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.200424 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.303289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.303347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.303361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.303381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.303414 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.409636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.409672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.409688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.409704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.409714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.512238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.512268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.512277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.512292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.512302 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.614033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.614064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.614074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.614090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.614102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.711839 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:34 crc kubenswrapper[4832]: E1204 06:09:34.711937 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.712151 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:34 crc kubenswrapper[4832]: E1204 06:09:34.712196 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.712228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:34 crc kubenswrapper[4832]: E1204 06:09:34.712266 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.716380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.716509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.716585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.716651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.716709 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.724791 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.735792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.809198 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:33Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 06:09:33.330099 6119 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 06:09:33.330120 6119 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 06:09:33.330125 6119 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 06:09:33.330136 6119 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 06:09:33.330145 6119 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 06:09:33.330155 6119 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1204 06:09:33.330159 6119 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1204 06:09:33.330192 6119 factory.go:656] Stopping watch factory\\\\nI1204 06:09:33.330202 6119 ovnkube.go:599] Stopped ovnkube\\\\nI1204 06:09:33.330223 6119 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 06:09:33.330229 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 06:09:33.330236 6119 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 06:09:33.330241 6119 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 06:09:33.330246 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 06:09:33.330251 6119 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.819815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.819854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.819866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.819883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.819894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.822692 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.835819 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.845367 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.857091 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.868041 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.878209 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.888958 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.902693 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.913916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.920359 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/0.log" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.921355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.921480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.921497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.921514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.921525 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:34Z","lastTransitionTime":"2025-12-04T06:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.923363 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e"} Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.923771 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.926180 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.936083 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.946116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.955859 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.964378 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.975634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.985855 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:34 crc kubenswrapper[4832]: I1204 06:09:34.995503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.009244 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.018660 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.023992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.024043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.024062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.024084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.024098 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.032175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.043567 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.052899 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.063148 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.073616 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.089156 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b
56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:33Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 06:09:33.330099 6119 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 06:09:33.330120 6119 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 06:09:33.330125 6119 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 06:09:33.330136 6119 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 06:09:33.330145 6119 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 06:09:33.330155 6119 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1204 06:09:33.330159 6119 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1204 06:09:33.330192 6119 factory.go:656] Stopping watch factory\\\\nI1204 06:09:33.330202 6119 ovnkube.go:599] Stopped ovnkube\\\\nI1204 06:09:33.330223 6119 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 06:09:33.330229 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 06:09:33.330236 6119 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 06:09:33.330241 6119 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 06:09:33.330246 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 06:09:33.330251 6119 handler.go:208] Removed *v1.Node event handler 
2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.126882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.126922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.126933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.126947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.126959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.229692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.229922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.230001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.230134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.230199 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.333369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.333417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.333426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.333439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.333448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.436155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.436429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.436591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.436697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.436785 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.539371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.539421 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.539429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.539442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.539451 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.642130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.642166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.642177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.642195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.642207 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.745654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.745725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.745751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.745781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.745800 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.847965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.848026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.848037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.848057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.848068 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.927528 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/1.log" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.928284 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/0.log" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.931057 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e" exitCode=1 Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.931098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.931140 4832 scope.go:117] "RemoveContainer" containerID="94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.931773 4832 scope.go:117] "RemoveContainer" containerID="b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e" Dec 04 06:09:35 crc kubenswrapper[4832]: E1204 06:09:35.931949 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.949763 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.949974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.950021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.950033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.950049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.950062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:35Z","lastTransitionTime":"2025-12-04T06:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.962469 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.973835 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:35 crc kubenswrapper[4832]: I1204 06:09:35.986676 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:35Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.004367 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.017214 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.029935 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.043690 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.052167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.052204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.052212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.052226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.052235 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.057511 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.069775 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.078741 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.089648 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.099857 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.122497 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b
56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:33Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 06:09:33.330099 6119 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 06:09:33.330120 6119 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 06:09:33.330125 6119 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 06:09:33.330136 6119 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 06:09:33.330145 6119 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 06:09:33.330155 6119 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1204 06:09:33.330159 6119 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1204 06:09:33.330192 6119 factory.go:656] Stopping watch factory\\\\nI1204 06:09:33.330202 6119 ovnkube.go:599] Stopped ovnkube\\\\nI1204 06:09:33.330223 6119 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 06:09:33.330229 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 06:09:33.330236 6119 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 06:09:33.330241 6119 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 06:09:33.330246 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 06:09:33.330251 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.154793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.154835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.154846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.154873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.154882 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.257159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.257192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.257200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.257213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.257224 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.479004 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls"] Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.479446 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.481773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.481798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.481807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.481818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.481826 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.482539 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.482873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.482906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.482917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.482931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.482944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.485149 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.496239 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.496486 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.500380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.500639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.500783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.500953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.501187 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.508843 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.514979 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.518366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.518429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.518438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.518451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.518462 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.522645 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc 
kubenswrapper[4832]: E1204 06:09:36.531019 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.535062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.535101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 
crc kubenswrapper[4832]: I1204 06:09:36.535116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.535137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.535151 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.536093 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.547481 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.550115 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.554150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.554180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.554190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.554202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.554211 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.558982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.571067 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.571193 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a
88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.571307 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.580933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrx7n\" (UniqueName: \"kubernetes.io/projected/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-kube-api-access-nrx7n\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.580992 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.581033 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.581071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.583023 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.584000 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.584066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.584078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.584096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.584106 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.596163 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.608011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.618629 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.629467 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.641001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.653295 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.672673 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b
56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94c90cd89adbc134bb8ccaf0c887a53a5993be3277e3f80bab2ccfddb9db0800\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:33Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1204 06:09:33.330099 6119 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 06:09:33.330120 6119 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 06:09:33.330125 6119 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 06:09:33.330136 6119 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 06:09:33.330145 6119 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 06:09:33.330155 6119 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1204 06:09:33.330159 6119 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1204 06:09:33.330192 6119 factory.go:656] Stopping watch factory\\\\nI1204 06:09:33.330202 6119 ovnkube.go:599] Stopped ovnkube\\\\nI1204 06:09:33.330223 6119 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 06:09:33.330229 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 06:09:33.330236 6119 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 06:09:33.330241 6119 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 06:09:33.330246 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 06:09:33.330251 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.682208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrx7n\" (UniqueName: \"kubernetes.io/projected/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-kube-api-access-nrx7n\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.682238 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.682254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.682277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.683070 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.683170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.686867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.686900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.686911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.686929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.686941 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.687097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.700975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrx7n\" (UniqueName: \"kubernetes.io/projected/e0d1459e-480d-42bf-bdc2-0f2c40a73eb6-kube-api-access-nrx7n\") pod \"ovnkube-control-plane-749d76644c-ss7ls\" (UID: \"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.709606 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.709751 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.710241 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.710319 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.710411 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.710471 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.788740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.788785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.788794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.788808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.788817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.793963 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.891147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.891198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.891207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.891222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.891231 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.935215 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/1.log" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.938694 4832 scope.go:117] "RemoveContainer" containerID="b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e" Dec 04 06:09:36 crc kubenswrapper[4832]: E1204 06:09:36.938847 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.939732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" event={"ID":"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6","Type":"ContainerStarted","Data":"ff163afb5558f45bcde23003602b0d654f6dde065311df7ada09df2a25efdee7"} Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.950181 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.960772 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.969948 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.980294 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.992290 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:36Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.993331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.993363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.993372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.993405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:36 crc kubenswrapper[4832]: I1204 06:09:36.993419 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:36Z","lastTransitionTime":"2025-12-04T06:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.002932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.013227 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.023152 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.032851 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.042634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.056092 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.068193 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.079617 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.091808 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.096170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.096242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.096259 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.096286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.096305 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.109593 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b
56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.198816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.198876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.198886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.198900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.198913 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.301023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.301056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.301064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.301080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.301088 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.403811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.403844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.403852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.403865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.403874 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.506671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.506707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.506716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.506735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.506744 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.608773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.608818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.608834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.608852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.608864 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.710863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.710900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.710909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.710922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.710931 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.813610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.813650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.813659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.813684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.813701 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.842643 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ctzsn"] Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.843343 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:37 crc kubenswrapper[4832]: E1204 06:09:37.843515 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.857233 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.869643 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.892208 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.893505 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqt29\" (UniqueName: \"kubernetes.io/projected/37ab4745-26f8-4cb8-a4c4-c3064251922e-kube-api-access-wqt29\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.893579 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.905799 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.915985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.916027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.916036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.916051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.916060 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:37Z","lastTransitionTime":"2025-12-04T06:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.917344 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.930571 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.943187 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.944695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" event={"ID":"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6","Type":"ContainerStarted","Data":"a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a"} Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.958446 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.980661 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.994856 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqt29\" (UniqueName: \"kubernetes.io/projected/37ab4745-26f8-4cb8-a4c4-c3064251922e-kube-api-access-wqt29\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.994945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:37 crc kubenswrapper[4832]: E1204 06:09:37.995058 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:37 crc kubenswrapper[4832]: E1204 06:09:37.995120 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:09:38.495102536 +0000 UTC m=+34.107920252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:37 crc kubenswrapper[4832]: I1204 06:09:37.997677 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:37Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.008371 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.013586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqt29\" (UniqueName: \"kubernetes.io/projected/37ab4745-26f8-4cb8-a4c4-c3064251922e-kube-api-access-wqt29\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.018611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.018752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.018862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.018949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.019034 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.024032 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.037085 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.052480 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.065529 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.074768 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.121836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.121872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.121883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.121900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.121909 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.224232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.224266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.224274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.224287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.224295 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.327148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.327200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.327212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.327233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.327251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.398486 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.398687 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:09:54.398648777 +0000 UTC m=+50.011466523 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.398819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.399006 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.399088 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-04 06:09:54.399073877 +0000 UTC m=+50.011891593 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.430034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.430100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.430115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.430140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.430154 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.499572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.499735 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.499767 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.499893 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.499904 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:54.499873143 +0000 UTC m=+50.112690889 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.499973 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:09:39.499955255 +0000 UTC m=+35.112772971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.533585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.533639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.533651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.533674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.533690 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.600290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.600441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600672 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600721 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600745 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600672 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600844 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:54.600810432 +0000 UTC m=+50.213628178 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600849 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600884 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.600957 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:09:54.600931935 +0000 UTC m=+50.213749671 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.637011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.637081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.637093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.637116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.637130 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.710290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.710353 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.710448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.710577 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.710700 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:38 crc kubenswrapper[4832]: E1204 06:09:38.710901 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.740015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.740076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.740094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.740117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.740139 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.844226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.844282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.844302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.844328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.844348 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.947520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.947588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.947601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.947618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.947633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:38Z","lastTransitionTime":"2025-12-04T06:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.952906 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" event={"ID":"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6","Type":"ContainerStarted","Data":"bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1"} Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.968591 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.984520 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:38 crc kubenswrapper[4832]: I1204 06:09:38.996013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:38Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.012301 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.027498 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.041862 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.051771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.051824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.051841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.051867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.051881 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.054789 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.068940 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.082339 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.096329 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.116001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.133305 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.149151 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.155849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.155934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.155959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.155995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.156019 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.175246 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.194384 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.219864 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:39Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.259609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.259745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.259765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.259795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.259851 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.362920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.362981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.362998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.363024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.363041 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.466936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.466987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.466996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.467013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.467023 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.512912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:39 crc kubenswrapper[4832]: E1204 06:09:39.513243 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:39 crc kubenswrapper[4832]: E1204 06:09:39.513379 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:09:41.513351521 +0000 UTC m=+37.126169227 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.570215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.570258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.570269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.570285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.570295 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.672927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.673002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.673027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.673061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.673089 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.710698 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:39 crc kubenswrapper[4832]: E1204 06:09:39.711005 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.776381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.776463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.776479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.776502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.776517 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.879141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.879178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.879189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.879203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.879223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.981255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.981287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.981296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.981310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:39 crc kubenswrapper[4832]: I1204 06:09:39.981320 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:39Z","lastTransitionTime":"2025-12-04T06:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.084474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.084513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.084522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.084536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.084548 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.183183 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.187251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.187286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.187294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.187307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.187316 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.197234 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.214978 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.252182 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.271504 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.280121 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.289198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.289237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.289247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.289263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.289273 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.291576 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 
06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.302067 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.316041 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.333836 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.353501 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.367360 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.377911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.389463 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.391066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.391091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.391101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.391117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.391127 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.401223 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.411825 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.421191 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:40Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.492912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.492953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.492964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.492980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.492990 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.595095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.595140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.595170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.595192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.595208 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.697146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.697186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.697201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.697215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.697226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.710019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.710111 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:40 crc kubenswrapper[4832]: E1204 06:09:40.710125 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.710211 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:40 crc kubenswrapper[4832]: E1204 06:09:40.710313 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:40 crc kubenswrapper[4832]: E1204 06:09:40.710429 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.798857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.798900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.798911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.798928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.798939 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.901257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.901306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.901320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.901340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:40 crc kubenswrapper[4832]: I1204 06:09:40.901353 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:40Z","lastTransitionTime":"2025-12-04T06:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.004099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.004136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.004145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.004159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.004168 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.107085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.107142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.107153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.107167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.107177 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.210219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.210257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.210265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.210283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.210294 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.312777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.312844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.312865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.312893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.312908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.415472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.415517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.415528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.415543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.415555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.517270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.517332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.517354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.517378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.517425 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.532983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:41 crc kubenswrapper[4832]: E1204 06:09:41.533119 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:41 crc kubenswrapper[4832]: E1204 06:09:41.533170 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:09:45.533156327 +0000 UTC m=+41.145974033 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.619873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.619962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.619977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.619997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.620008 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.710012 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:41 crc kubenswrapper[4832]: E1204 06:09:41.710173 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.722082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.722119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.722128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.722143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.722153 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.825216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.825260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.825269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.825284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.825295 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.927261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.927309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.927320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.927334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:41 crc kubenswrapper[4832]: I1204 06:09:41.927343 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:41Z","lastTransitionTime":"2025-12-04T06:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.029655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.029738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.029777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.029810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.029834 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.131884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.131931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.131942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.131960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.131972 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.234644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.234693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.234717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.234741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.234757 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.336958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.336995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.337024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.337047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.337062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.440028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.440073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.440085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.440107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.440128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.543029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.543068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.543079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.543094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.543104 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.645520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.645556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.645584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.645599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.645608 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.710511 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.710560 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.710511 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:42 crc kubenswrapper[4832]: E1204 06:09:42.710690 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:42 crc kubenswrapper[4832]: E1204 06:09:42.710831 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:42 crc kubenswrapper[4832]: E1204 06:09:42.710965 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.747252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.747307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.747324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.747347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.747365 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.851066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.851118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.851135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.851158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.851174 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.953575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.953626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.953643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.953733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:42 crc kubenswrapper[4832]: I1204 06:09:42.953755 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:42Z","lastTransitionTime":"2025-12-04T06:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.055731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.055767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.055776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.055789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.055798 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.157904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.157938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.157946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.157960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.157969 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.260249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.260287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.260297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.260310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.260318 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.362890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.362951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.362960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.362975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.362985 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.465450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.465476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.465484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.465499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.465511 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.567764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.567790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.567798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.567811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.567820 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.670728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.670783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.670812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.670839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.670856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.710329 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:43 crc kubenswrapper[4832]: E1204 06:09:43.710472 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.773461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.773497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.773509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.773556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.773567 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.875442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.875510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.875526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.875577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.875594 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.977260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.977293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.977303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.977317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:43 crc kubenswrapper[4832]: I1204 06:09:43.977328 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:43Z","lastTransitionTime":"2025-12-04T06:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.079355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.079419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.079430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.079444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.079454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.182172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.182257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.182269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.182284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.182293 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.284127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.284167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.284180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.284199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.284210 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.386240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.386358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.386440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.386491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.386564 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.489170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.489225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.489233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.489248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.489275 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.591433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.591502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.591514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.591536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.591548 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.694004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.694089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.694109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.694543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.695086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.709517 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.709583 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.709644 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:44 crc kubenswrapper[4832]: E1204 06:09:44.709777 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:44 crc kubenswrapper[4832]: E1204 06:09:44.709983 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:44 crc kubenswrapper[4832]: E1204 06:09:44.710133 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.725104 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.738178 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.750171 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.761154 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.775116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.787466 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.798245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.798292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.798301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.798316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.798325 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.800545 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.811500 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 
06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.824961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.834781 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.847715 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.859537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.872252 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.884413 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.894595 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.900182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.900214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.900223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.900240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.900250 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:44Z","lastTransitionTime":"2025-12-04T06:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:44 crc kubenswrapper[4832]: I1204 06:09:44.911773 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:44Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.002676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.002707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.002716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.002746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.002755 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.105658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.105691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.105701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.105715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.105724 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.208002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.208063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.208071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.208086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.208097 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.311431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.311488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.311497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.311514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.311523 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.414058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.414122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.414134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.414152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.414161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.516843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.516895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.516908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.516925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.516937 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.572786 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:45 crc kubenswrapper[4832]: E1204 06:09:45.572965 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:45 crc kubenswrapper[4832]: E1204 06:09:45.573064 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:09:53.573042674 +0000 UTC m=+49.185860380 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.619332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.619485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.619500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.619525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.619537 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.710521 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:45 crc kubenswrapper[4832]: E1204 06:09:45.710688 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.722204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.722256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.722267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.722283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.722296 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.824684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.824723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.824733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.824747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.824756 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.927121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.927170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.927186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.927202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:45 crc kubenswrapper[4832]: I1204 06:09:45.927211 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:45Z","lastTransitionTime":"2025-12-04T06:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.029514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.029555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.029567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.029582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.029593 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.132541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.132601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.132614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.132630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.132676 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.235379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.235561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.235579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.235604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.235619 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.342022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.342073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.342089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.342111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.342126 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.444289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.444323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.444331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.444346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.444355 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.546472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.546519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.546531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.546548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.546561 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.649295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.649356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.649371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.649411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.649424 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.710290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.710356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.710291 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.710548 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.710484 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.710861 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.752219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.752286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.752300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.752327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.752344 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.796558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.796628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.796640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.796665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.796686 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.814173 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:46Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.820138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.820179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.820190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.820210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.820225 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.842873 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:46Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.848596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.848643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.848655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.848671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.848679 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.869808 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:46Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.875878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.875943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.875961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.875989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.876012 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.891351 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:46Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.896836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.896893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.896907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.896927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.896941 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.910547 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:46Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:46 crc kubenswrapper[4832]: E1204 06:09:46.910699 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.913201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.913242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.913255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.913280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:46 crc kubenswrapper[4832]: I1204 06:09:46.913296 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:46Z","lastTransitionTime":"2025-12-04T06:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.017377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.017519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.017541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.017572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.017592 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.121386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.121491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.121515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.121543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.121565 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.224691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.224772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.224796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.224826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.224849 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.327614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.327650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.327660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.327676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.327686 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.429610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.429651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.429660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.429674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.429683 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.532528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.532602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.532622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.532648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.532671 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.634915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.634956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.634967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.634980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.634989 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.709901 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:47 crc kubenswrapper[4832]: E1204 06:09:47.710027 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.737534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.737593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.737613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.737635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.737652 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.840230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.840316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.840332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.840382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.840435 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.942977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.943025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.943036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.943054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:47 crc kubenswrapper[4832]: I1204 06:09:47.943067 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:47Z","lastTransitionTime":"2025-12-04T06:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.045880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.045928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.045937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.045953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.045963 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.148594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.148704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.148722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.148744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.148761 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.250612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.250655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.250666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.250702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.250711 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.353221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.353307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.353323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.353342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.353356 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.455777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.455834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.455851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.455872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.455886 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.558709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.558764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.558780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.558801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.558815 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.661568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.661628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.661645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.661667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.661684 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.710558 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.710590 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.710578 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:48 crc kubenswrapper[4832]: E1204 06:09:48.710741 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:48 crc kubenswrapper[4832]: E1204 06:09:48.710920 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:48 crc kubenswrapper[4832]: E1204 06:09:48.711072 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.764453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.764524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.764546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.764575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.764600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.867490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.867552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.867570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.867593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.867609 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.971018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.971099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.971124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.971148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:48 crc kubenswrapper[4832]: I1204 06:09:48.971165 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:48Z","lastTransitionTime":"2025-12-04T06:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.073896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.073944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.073959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.073979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.073993 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.176730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.176771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.176780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.176793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.176803 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.278754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.278799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.278816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.278832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.278873 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.381493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.381537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.381546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.381561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.381571 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.483790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.483872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.483883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.483899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.483910 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.586253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.586292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.586301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.586317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.586325 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.689009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.689055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.689064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.689077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.689084 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.709899 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:49 crc kubenswrapper[4832]: E1204 06:09:49.710373 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.710497 4832 scope.go:117] "RemoveContainer" containerID="b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.791121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.791332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.791340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.791355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.791364 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.893206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.893240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.893249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.893262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.893272 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.985538 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/1.log" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.988024 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae"} Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.988410 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.996018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.996190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.996217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.996249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:49 crc kubenswrapper[4832]: I1204 06:09:49.996275 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:49Z","lastTransitionTime":"2025-12-04T06:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.001544 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:49Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.011556 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.026373 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.040701 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.052141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.065043 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.079526 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.091856 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.098024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.098069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.098077 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.098092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.098103 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.133438 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac158
9ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.145629 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.159371 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.168217 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.180794 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.191663 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.200258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.200296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.200309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.200326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.200339 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.203628 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.214081 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:50Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.302908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.302950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.302963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.303008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.303020 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.404845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.404889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.404899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.404913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.404923 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.506866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.506899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.506907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.506921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.506930 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.609718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.609777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.609793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.609815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.609832 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.710348 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.710367 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.710471 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:50 crc kubenswrapper[4832]: E1204 06:09:50.710564 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:50 crc kubenswrapper[4832]: E1204 06:09:50.710737 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:50 crc kubenswrapper[4832]: E1204 06:09:50.710853 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.712220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.712254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.712265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.712279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.712289 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.815687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.815734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.815744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.815784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.815798 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.917968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.918007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.918019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.918036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.918048 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:50Z","lastTransitionTime":"2025-12-04T06:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.993384 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/2.log" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.994297 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/1.log" Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.998124 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae" exitCode=1 Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.998168 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae"} Dec 04 06:09:50 crc kubenswrapper[4832]: I1204 06:09:50.998205 4832 scope.go:117] "RemoveContainer" containerID="b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:50.998997 4832 scope.go:117] "RemoveContainer" containerID="1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae" Dec 04 06:09:51 crc kubenswrapper[4832]: E1204 06:09:50.999239 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.013544 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.025216 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.025261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.025310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.025332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.025346 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.029937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.045743 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.055859 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.066259 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.077574 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.090096 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.107106 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac158
9ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4146af19f73073d8f0c30b06d47ad2b9ef18b56eaf1010eb3283b1e3e196e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:35Z\\\",\\\"message\\\":\\\"protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2a3fb1a3-a476-4e14-bcf5-fb79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 06:09:34.736750 6248 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 06:09:34.736738 6248 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 06:09:34.736782 6248 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.109:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1f62a432-33b9-495d-83b2-d1dbe6961325}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1204 06:09:34.736822 6248 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.119966 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.128176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.128212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.128223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.128237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.128250 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.131055 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.143853 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.154748 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.167723 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.181572 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.191592 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.200445 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:09:51Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.230428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.230476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.230487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.230503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.230513 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.333809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.333924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.333951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.333982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.334008 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.436774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.436817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.436829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.436845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.436856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.539357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.539436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.539449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.539467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.539479 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.642233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.642316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.642343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.642375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.642436 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.710516 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:51 crc kubenswrapper[4832]: E1204 06:09:51.710665 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.745281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.745329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.745340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.745357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.745370 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.848109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.848156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.848173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.848192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.848204 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.950987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.951026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.951036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.951053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:51 crc kubenswrapper[4832]: I1204 06:09:51.951069 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:51Z","lastTransitionTime":"2025-12-04T06:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.002307 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/2.log" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.005205 4832 scope.go:117] "RemoveContainer" containerID="1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae" Dec 04 06:09:52 crc kubenswrapper[4832]: E1204 06:09:52.005336 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.016098 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.024762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.036770 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.047927 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.054103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.054139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.054149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.054166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.054176 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.056863 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.067608 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 
06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.077675 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.085854 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.098661 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.111480 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.122615 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.132484 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.147272 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.156071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.156110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.156119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.156159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.156174 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.158126 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.168603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.177299 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:52Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.259603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.259666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.259690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.259718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.259742 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.363010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.363088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.363112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.363135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.363152 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.465056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.465134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.465157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.465187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.465211 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.568132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.568189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.568202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.568221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.568574 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.672044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.672116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.672138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.672166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.672188 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.710124 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.710223 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:52 crc kubenswrapper[4832]: E1204 06:09:52.710292 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.710372 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:52 crc kubenswrapper[4832]: E1204 06:09:52.710589 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:52 crc kubenswrapper[4832]: E1204 06:09:52.710759 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.775842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.775902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.775918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.775942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.775959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.878121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.878154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.878162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.878175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.878184 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.981198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.981246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.981258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.981278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:52 crc kubenswrapper[4832]: I1204 06:09:52.981291 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:52Z","lastTransitionTime":"2025-12-04T06:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.083791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.083827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.083838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.083853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.083863 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.186376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.186435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.186445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.186459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.186468 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.289056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.289098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.289109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.289129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.289141 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.392027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.392072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.392083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.392104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.392115 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.494471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.494536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.494559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.494587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.494608 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.597077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.597140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.597158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.597183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.597202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.670345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:53 crc kubenswrapper[4832]: E1204 06:09:53.670524 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:53 crc kubenswrapper[4832]: E1204 06:09:53.670582 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:10:09.670568098 +0000 UTC m=+65.283385804 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.700248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.700367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.700446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.700507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.700552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.709486 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:53 crc kubenswrapper[4832]: E1204 06:09:53.709634 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.804313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.804380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.804430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.804456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.804473 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.906570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.906597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.906606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.906619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:53 crc kubenswrapper[4832]: I1204 06:09:53.906627 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:53Z","lastTransitionTime":"2025-12-04T06:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.008494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.008529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.008538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.008551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.008560 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.111127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.111184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.111205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.111236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.111260 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.213722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.213754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.213764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.213780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.213790 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.317772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.317850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.317868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.317894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.317911 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.420261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.420299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.420308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.420335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.420344 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.480258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.480429 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:10:26.480377645 +0000 UTC m=+82.093195351 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.480594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.480739 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.480795 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:10:26.480786315 +0000 UTC m=+82.093604021 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.523215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.523261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.523273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.523294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.523304 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.581568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.581700 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.581754 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:10:26.581741214 +0000 UTC m=+82.194558920 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.626021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.626074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.626087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.626109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.626121 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.683059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.683134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683271 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683302 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683314 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683338 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683365 4832 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683376 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:10:26.6833589 +0000 UTC m=+82.296176606 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683383 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.683468 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:10:26.683447522 +0000 UTC m=+82.296265278 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.709878 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.710040 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.710601 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.710801 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.711036 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:54 crc kubenswrapper[4832]: E1204 06:09:54.711146 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.723982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z"
Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.728467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.728502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.728513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.728530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.728544 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.737750 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.750792 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.761223 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.773086 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 
06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.784054 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.795248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.811652 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.822297 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.830659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.830700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.830712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.830728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.830739 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.832558 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.841666 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.852918 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.864357 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.874468 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.884346 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.893512 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:09:54Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.932446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.932513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.932532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.932547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:54 crc kubenswrapper[4832]: I1204 06:09:54.932556 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:54Z","lastTransitionTime":"2025-12-04T06:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.034816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.034872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.034889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.034914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.034931 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.137530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.137603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.137626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.137656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.137681 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.240054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.240106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.240125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.240164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.240201 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.342927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.343079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.343103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.343133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.343154 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.446101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.446140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.446149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.446164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.446174 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.548797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.548868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.548877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.548899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.548910 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.652375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.652468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.652484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.652511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.652527 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.710250 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:55 crc kubenswrapper[4832]: E1204 06:09:55.710443 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.756320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.756383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.756415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.756441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.756454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.859370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.859436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.859448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.859464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.859475 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.961880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.961933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.961944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.961965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:55 crc kubenswrapper[4832]: I1204 06:09:55.961976 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:55Z","lastTransitionTime":"2025-12-04T06:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.064652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.064695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.064706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.064724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.064735 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.168296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.168620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.168629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.168644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.168653 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.270450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.270524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.270536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.270551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.270562 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.372615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.372663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.372681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.372699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.372711 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.475282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.475319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.475332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.475350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.475362 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.577998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.578057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.578067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.578084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.578095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.680348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.680383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.680430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.680449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.680459 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.710587 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.710656 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.710665 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:56 crc kubenswrapper[4832]: E1204 06:09:56.710719 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:56 crc kubenswrapper[4832]: E1204 06:09:56.710758 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:56 crc kubenswrapper[4832]: E1204 06:09:56.710803 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.782898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.783005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.783031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.783064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.783087 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.886408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.886457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.886470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.886487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.886497 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.900341 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.917683 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.917988 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.931097 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\
\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0
c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}
\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.945861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-
kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.956637 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 
2025-08-24T17:21:41Z" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.970934 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.984599 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.988328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.988376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.988408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.988428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.988441 4832 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:56Z","lastTransitionTime":"2025-12-04T06:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:56 crc kubenswrapper[4832]: I1204 06:09:56.997360 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:56Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.018208 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.031975 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.046616 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.057422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.069458 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.084255 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.094989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.095027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.095038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.095056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.095068 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.099340 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.112589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.123069 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.154817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.154859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.154867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.154885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.154894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.168316 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.172161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.172215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.172232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.172254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.172268 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.186512 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.190701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.190765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.190781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.190807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.190824 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.205556 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.209043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.209073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.209081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.209094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.209102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.230751 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.243008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.243080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.243096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.243117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.243132 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.261502 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:09:57Z is after 2025-08-24T17:21:41Z" Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.261615 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.262897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.262918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.262926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.262938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.262946 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.365418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.365459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.365471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.365488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.365499 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.467754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.467791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.467801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.467816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.467824 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.571202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.571260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.571279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.571305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.571322 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.677940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.678042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.678057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.678078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.678092 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.709788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:57 crc kubenswrapper[4832]: E1204 06:09:57.709966 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.780798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.780861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.780879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.780901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.780918 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.884523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.884604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.884641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.884677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.884708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.987955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.988038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.988059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.988088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:57 crc kubenswrapper[4832]: I1204 06:09:57.988107 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:57Z","lastTransitionTime":"2025-12-04T06:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.091481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.091531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.091541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.091555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.091563 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.194168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.194218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.194231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.194406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.194422 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.296325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.296473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.296489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.296519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.296537 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.399432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.399499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.399517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.399542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.399560 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.502942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.503009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.503022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.503041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.503059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.605648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.605701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.605717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.605738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.605752 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.707732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.707770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.707787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.707807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.707817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.710263 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:09:58 crc kubenswrapper[4832]: E1204 06:09:58.710345 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.710367 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.710570 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:09:58 crc kubenswrapper[4832]: E1204 06:09:58.710585 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:09:58 crc kubenswrapper[4832]: E1204 06:09:58.710624 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.812130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.812173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.812186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.812202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.812217 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.915044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.915113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.915123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.915139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:58 crc kubenswrapper[4832]: I1204 06:09:58.915148 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:58Z","lastTransitionTime":"2025-12-04T06:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.017298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.017340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.017351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.017370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.017380 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.119276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.119312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.119321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.119334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.119343 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.225775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.225843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.225867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.225897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.225925 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.328455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.328493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.328501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.328515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.328524 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.431137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.431183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.431195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.431213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.431226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.534169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.534216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.534226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.534246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.534254 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.637301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.637360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.637383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.637442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.637466 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.710275 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:09:59 crc kubenswrapper[4832]: E1204 06:09:59.710512 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.740086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.740137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.740148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.740176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.740190 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.842672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.842719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.842728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.842743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.842755 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.944930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.944972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.944982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.944996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:09:59 crc kubenswrapper[4832]: I1204 06:09:59.945004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:09:59Z","lastTransitionTime":"2025-12-04T06:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.046823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.046864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.046872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.046884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.046894 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.149781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.149837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.149849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.149867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.149878 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.253346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.253381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.253411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.253433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.253444 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.355131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.355232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.355246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.355266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.355280 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.457591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.457629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.457641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.457666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.457677 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.560545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.560579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.560590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.560607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.560619 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.663809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.663871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.663889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.663911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.663928 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.709651 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.709659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.709786 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:00 crc kubenswrapper[4832]: E1204 06:10:00.709952 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:00 crc kubenswrapper[4832]: E1204 06:10:00.710346 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:00 crc kubenswrapper[4832]: E1204 06:10:00.710777 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.765926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.765971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.765980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.765995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.766003 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.868595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.868647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.868659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.868675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.868685 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.971349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.971383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.971454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.971470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:00 crc kubenswrapper[4832]: I1204 06:10:00.971481 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:00Z","lastTransitionTime":"2025-12-04T06:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.073862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.074144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.074245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.074348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.074478 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.176438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.176479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.176490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.176507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.176518 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.279051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.279337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.279423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.279510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.279616 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.382722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.382758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.382768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.382785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.382796 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.485862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.485901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.485914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.485933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.485944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.588960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.589009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.589020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.589041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.589053 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.692489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.692557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.692582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.692670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.692697 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.710252 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:01 crc kubenswrapper[4832]: E1204 06:10:01.710532 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.795624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.795756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.795779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.795807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.795828 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.898191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.898235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.898246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.898264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:01 crc kubenswrapper[4832]: I1204 06:10:01.898281 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:01Z","lastTransitionTime":"2025-12-04T06:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.000799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.000878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.000904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.000936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.000960 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.104466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.104510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.104523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.104540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.104553 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.207643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.207686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.207702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.207721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.207732 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.310318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.310381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.310410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.310435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.310448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.413805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.413894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.413911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.413934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.413951 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.516520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.516580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.516590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.516607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.516618 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.618375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.618440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.618449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.618469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.618478 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.709887 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:02 crc kubenswrapper[4832]: E1204 06:10:02.710019 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.709886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.710164 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:02 crc kubenswrapper[4832]: E1204 06:10:02.710215 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:02 crc kubenswrapper[4832]: E1204 06:10:02.710320 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.720329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.720362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.720371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.720383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.720417 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.822770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.822812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.822823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.822840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.822850 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.928433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.928466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.928475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.928514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:02 crc kubenswrapper[4832]: I1204 06:10:02.928524 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:02Z","lastTransitionTime":"2025-12-04T06:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.031127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.031189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.031211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.031242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.031279 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.133148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.133184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.133193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.133205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.133213 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.235647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.235703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.235718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.235735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.235746 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.338083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.338137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.338151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.338169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.338179 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.440675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.440735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.440745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.440762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.440780 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.543092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.543130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.543140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.543153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.543162 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.645990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.646041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.646053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.646069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.646437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.709751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:03 crc kubenswrapper[4832]: E1204 06:10:03.709899 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.710523 4832 scope.go:117] "RemoveContainer" containerID="1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae" Dec 04 06:10:03 crc kubenswrapper[4832]: E1204 06:10:03.710692 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.748990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.749029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.749060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.749079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.749090 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.851541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.851593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.851602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.851620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.851630 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.953463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.953504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.953512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.953527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:03 crc kubenswrapper[4832]: I1204 06:10:03.953537 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:03Z","lastTransitionTime":"2025-12-04T06:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.055368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.055459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.055472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.055489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.055501 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.158596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.158641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.158650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.158666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.158676 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.260841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.260897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.260912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.260931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.260942 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.363105 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.363138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.363148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.363163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.363172 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.465585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.465630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.465640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.465655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.465665 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
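[Editor's note] The status-patch failures that follow all fail the same way: the pod.network-node-identity.openshift.io webhook presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock, so every patch is rejected with "x509: certificate has expired or is not yet valid". A standalone Go check for that condition, assuming a PEM-encoded certificate on disk; the file path is a placeholder, not a path taken from this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Validity check equivalent to the x509 error in the webhook entries below:
// "current time T is after NotAfter".
func main() {
	data, err := os.ReadFile("/path/to/webhook-serving.crt") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		fmt.Printf("expired: current time %s is after %s\n", now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}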
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.568563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.568606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.568617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.568638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.568649 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.670976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.671019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.671027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.671045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.671057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.709614 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.709611 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 06:10:04 crc kubenswrapper[4832]: E1204 06:10:04.709785 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.709839 4832 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:04 crc kubenswrapper[4832]: E1204 06:10:04.709985 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:04 crc kubenswrapper[4832]: E1204 06:10:04.710109 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.724764 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 
06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.737239 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.750622 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.761359 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.772674 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.777130 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.777211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.777229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.777286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.777599 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.787434 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.803183 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.812576 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.823443 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.834627 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.845822 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09
:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.855921 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.873104 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.880084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.880134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.880144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.880160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.880169 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.884343 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.895125 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.904918 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.916521 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:04Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.981946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.982013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.982026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.982042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:04 crc kubenswrapper[4832]: I1204 06:10:04.982077 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:04Z","lastTransitionTime":"2025-12-04T06:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.084700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.084757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.084766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.084783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.084792 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.186794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.186833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.186849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.186869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.186883 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.288913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.288966 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.288977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.288996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.289009 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.391495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.391602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.391615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.391631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.391641 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.494548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.494596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.494607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.494623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.494636 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.597295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.597343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.597353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.597371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.597385 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.700531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.700580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.700589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.700604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.700613 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.710069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:05 crc kubenswrapper[4832]: E1204 06:10:05.710221 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.804370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.804446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.804459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.804474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.804486 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.907095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.907139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.907148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.907161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:05 crc kubenswrapper[4832]: I1204 06:10:05.907171 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:05Z","lastTransitionTime":"2025-12-04T06:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.009478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.009518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.009530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.009546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.009558 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.111977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.112022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.112036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.112057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.112072 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.214656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.214783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.214807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.214884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.214905 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.318483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.318914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.319034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.319070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.319092 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.421686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.421756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.421779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.421807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.421829 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.524836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.524910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.524932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.524965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.525023 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.627696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.627744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.627758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.627777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.627789 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.709765 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.709836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.709836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:06 crc kubenswrapper[4832]: E1204 06:10:06.709915 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:06 crc kubenswrapper[4832]: E1204 06:10:06.710054 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:06 crc kubenswrapper[4832]: E1204 06:10:06.710272 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.730051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.730099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.730114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.730136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.730149 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.833449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.833499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.833507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.833522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.833532 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.935940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.936031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.936041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.936057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:06 crc kubenswrapper[4832]: I1204 06:10:06.936155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:06Z","lastTransitionTime":"2025-12-04T06:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.038755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.038813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.038824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.038843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.038861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.141198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.141269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.141282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.141321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.141337 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.244083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.244119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.244128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.244142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.244150 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.346357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.346386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.346409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.346423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.346432 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.448648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.448698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.448714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.448734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.448748 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.453910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.453954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.453969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.453986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.454000 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.468839 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:07Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.472655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.472737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.472755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.472772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.472783 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.484270 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:07Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.487815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.487846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.487857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.487874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.487884 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.505346 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:07Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.508846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.508881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.508892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.508912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.508924 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.521692 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:07Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.526117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.526166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.526178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.526192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.526202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.538784 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:07Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.538945 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.551500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.551543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.551559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.551577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.551591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.654566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.654625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.654642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.654666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.654682 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.710407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:07 crc kubenswrapper[4832]: E1204 06:10:07.710595 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.757338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.757378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.757403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.757439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.757453 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.859937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.859981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.859993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.860010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.860025 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.962610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.962658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.962669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.962688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:07 crc kubenswrapper[4832]: I1204 06:10:07.962701 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:07Z","lastTransitionTime":"2025-12-04T06:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.064722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.064752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.064761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.064773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.064784 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.166701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.166742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.166757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.166776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.166788 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.268705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.268739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.268749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.268763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.268772 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.370877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.370910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.370919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.370934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.370943 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.472949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.472982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.472990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.473003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.473014 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.575490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.575539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.575548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.575565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.575574 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.678414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.678457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.678470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.678487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.678497 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.710097 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.710175 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:08 crc kubenswrapper[4832]: E1204 06:10:08.710236 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.710184 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:08 crc kubenswrapper[4832]: E1204 06:10:08.710339 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:08 crc kubenswrapper[4832]: E1204 06:10:08.710430 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.780847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.780893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.780904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.780921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.780932 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.882982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.883044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.883057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.883075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.883085 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.984912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.984984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.984997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.985015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:08 crc kubenswrapper[4832]: I1204 06:10:08.985028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:08Z","lastTransitionTime":"2025-12-04T06:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.087090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.087133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.087141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.087156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.087166 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.189740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.189774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.189783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.189797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.189807 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.292537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.292583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.292596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.292617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.292630 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.395049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.395092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.395104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.395119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.395129 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.497844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.497879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.497889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.497901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.497910 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.599778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.599817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.599830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.599845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.599858 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.702464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.702511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.702527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.702549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.702566 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.709698 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:09 crc kubenswrapper[4832]: E1204 06:10:09.709816 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.746320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:09 crc kubenswrapper[4832]: E1204 06:10:09.746531 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:10:09 crc kubenswrapper[4832]: E1204 06:10:09.746614 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:10:41.746595546 +0000 UTC m=+97.359413252 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.805087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.805120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.805130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.805145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.805155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.907960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.908025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.908043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.908069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:09 crc kubenswrapper[4832]: I1204 06:10:09.908087 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:09Z","lastTransitionTime":"2025-12-04T06:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.010776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.010843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.010866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.010895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.010922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.112971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.113007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.113017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.113035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.113052 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.215937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.215985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.216000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.216024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.216038 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.318290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.318323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.318335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.318350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.318359 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.420656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.420682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.420690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.420704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.420712 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.522798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.522832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.522841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.522855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.522865 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.624918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.624961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.624976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.624999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.625014 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.710560 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.710561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.710748 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:10 crc kubenswrapper[4832]: E1204 06:10:10.710819 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:10 crc kubenswrapper[4832]: E1204 06:10:10.710966 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:10 crc kubenswrapper[4832]: E1204 06:10:10.711007 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.723299 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.726770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.726790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.726800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.726814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.726825 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.829046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.829068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.829076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.829089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.829098 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.931603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.931652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.931664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.931682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:10 crc kubenswrapper[4832]: I1204 06:10:10.931694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:10Z","lastTransitionTime":"2025-12-04T06:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.033708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.033775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.033786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.033800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.033810 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.136534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.136965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.136977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.136998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.137012 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.240314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.240360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.240376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.240439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.240464 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.343344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.343411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.343423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.343437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.343447 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.445612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.445657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.445668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.445685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.445697 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.548632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.548687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.548703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.548721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.548734 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.652052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.652115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.652128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.652149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.652163 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.710021 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:11 crc kubenswrapper[4832]: E1204 06:10:11.710158 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.754216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.754305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.754331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.754363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.754384 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.858323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.858412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.858433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.858460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.858484 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.961648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.961705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.961715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.961734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:11 crc kubenswrapper[4832]: I1204 06:10:11.961750 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:11Z","lastTransitionTime":"2025-12-04T06:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.062744 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/0.log" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.062878 4832 generic.go:334] "Generic (PLEG): container finished" podID="325cffd3-4d6a-4916-8ad9-743cdc486769" containerID="145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf" exitCode=1 Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.062959 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerDied","Data":"145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.063707 4832 scope.go:117] "RemoveContainer" containerID="145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.064331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.064377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.064405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.064425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.064436 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.081265 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.107638 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.121276 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.132917 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.142464 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.154790 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.165029 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.166420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.166460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.166472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.166489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.166501 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.177279 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.188642 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.199285 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.208378 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.217956 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc 
kubenswrapper[4832]: I1204 06:10:12.234036 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.247229 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.256982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.267945 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 
06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.268804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.268832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.268840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.268853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.268865 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.278469 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.287996 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:12Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.370751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.370784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.370793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.370808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.370819 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.473161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.473189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.473196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.473208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.473216 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.575665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.575692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.575700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.575714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.575722 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.678297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.678581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.678649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.678714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.678771 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.709988 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:12 crc kubenswrapper[4832]: E1204 06:10:12.710381 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.710134 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:12 crc kubenswrapper[4832]: E1204 06:10:12.710696 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.710080 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:12 crc kubenswrapper[4832]: E1204 06:10:12.710974 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.781637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.781700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.781716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.781741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.781756 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.884337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.884374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.884385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.884417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.884426 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.986363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.986423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.986432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.986446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:12 crc kubenswrapper[4832]: I1204 06:10:12.986456 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:12Z","lastTransitionTime":"2025-12-04T06:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.070009 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/0.log" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.070108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerStarted","Data":"cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.088856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.088889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.088898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.088910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.088919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.093558 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.106921 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.121124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.136726 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.149712 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.163985 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.183648 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.191676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.191720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.191732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.191750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.191763 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.204300 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.218549 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.231920 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.245370 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.256020 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.271548 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.284589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.294549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.294589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.294600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.294616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.294627 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.300934 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.312890 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.326986 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.349111 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac158
9ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:13Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.396752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.396801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.396814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.396834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.396846 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.498998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.499222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.499287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.499508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.499600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.602438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.602702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.602775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.602868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.602950 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.704814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.704848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.704857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.704871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.704879 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.710269 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:13 crc kubenswrapper[4832]: E1204 06:10:13.710429 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.807373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.807428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.807437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.807452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.807462 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.909919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.910027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.910044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.910073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:13 crc kubenswrapper[4832]: I1204 06:10:13.910091 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:13Z","lastTransitionTime":"2025-12-04T06:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.012015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.012058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.012069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.012084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.012094 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.114511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.114564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.114577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.114596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.114609 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.217207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.217245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.217257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.217273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.217288 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.320025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.320073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.320086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.320108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.320124 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.421982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.422064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.422078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.422095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.422106 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.524198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.524267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.524283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.524301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.524312 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.626586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.626636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.626649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.626665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.626676 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.710290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.710338 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.710341 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:14 crc kubenswrapper[4832]: E1204 06:10:14.710491 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:14 crc kubenswrapper[4832]: E1204 06:10:14.710873 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:14 crc kubenswrapper[4832]: E1204 06:10:14.710933 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.711884 4832 scope.go:117] "RemoveContainer" containerID="1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.723796 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.734754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.734791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.734801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.734818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.734830 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.739716 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.752886 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.762761 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.772432 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.784107 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.795502 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.810946 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.823760 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.833316 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.837369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.837427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.837437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.837453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.837463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.844156 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.856726 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.868865 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.879295 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.894434 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac158
9ca3b148d8c9c8e505af7cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.908216 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.919524 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.929258 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:14Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.940931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.940957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.940965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.940978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:14 crc kubenswrapper[4832]: I1204 06:10:14.940986 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:14Z","lastTransitionTime":"2025-12-04T06:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.043445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.043481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.043494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.043509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.043518 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.077980 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/2.log" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.080924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.082008 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.102093 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.115493 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.124992 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.135430 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.145124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.146239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.146273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.146284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.146301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.146313 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.157453 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.167368 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.180921 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.194265 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.203349 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.212808 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 
06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.226262 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.236628 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.248340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.248456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.248473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.248491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.248502 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.249824 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.271252 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.284524 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.298799 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.310119 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:10:15Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.351251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.351288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.351297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.351313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.351324 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.453422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.453459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.453469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.453488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.453528 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.556707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.556773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.556794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.556815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.556833 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.660436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.660498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.660512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.660530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.660543 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.709825 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:15 crc kubenswrapper[4832]: E1204 06:10:15.709971 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.763187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.763222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.763233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.763248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.763259 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.865406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.865466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.865482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.865499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.865510 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.968544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.968588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.968602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.968622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:15 crc kubenswrapper[4832]: I1204 06:10:15.968633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:15Z","lastTransitionTime":"2025-12-04T06:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.072619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.072665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.072674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.072697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.072708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.087961 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/3.log" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.088745 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/2.log" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.093323 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" exitCode=1 Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.093383 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.093466 4832 scope.go:117] "RemoveContainer" containerID="1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.098383 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:10:16 crc kubenswrapper[4832]: E1204 06:10:16.106713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.118000 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.141239 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.154582 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.166201 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.177434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.177471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.177482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.177499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.177511 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.178545 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.193071 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.206220 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.222697 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.238011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.249248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.261279 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 
06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.272060 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.279565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.279607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.279618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.279636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.279647 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.281477 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.292377 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.308249 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea
951b64bd206763f553790159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0d871f1551106e88bfa648cfde853e93dac1589ca3b148d8c9c8e505af7cae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:09:50Z\\\",\\\"message\\\":\\\"et-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00067647b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1204 06:09:50.499543 6467 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:15Z\\\",\\\"message\\\":\\\"4 06:10:15.496952 6804 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1204 06:10:15.496954 6804 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-zdmhj in node crc\\\\nI1204 06:10:15.496958 6804 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1204 06:10:15.496978 6804 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1204 06:10:15.496931 6804 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF1204 06:10:15.496856 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:10:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.320347 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.330225 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.339054 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:10:16Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.381875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.381928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.381939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.381958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.381970 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.484222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.484262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.484271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.484285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.484294 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.587175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.587232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.587244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.587263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.587276 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.691128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.691200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.691227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.691264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.691292 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.709905 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.709969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.709969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:16 crc kubenswrapper[4832]: E1204 06:10:16.710207 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:16 crc kubenswrapper[4832]: E1204 06:10:16.710958 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:16 crc kubenswrapper[4832]: E1204 06:10:16.711648 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.793274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.793304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.793312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.793325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.793333 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.897021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.897066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.897076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.897094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:16 crc kubenswrapper[4832]: I1204 06:10:16.897105 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:16Z","lastTransitionTime":"2025-12-04T06:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.000042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.000255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.000280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.000309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.000328 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.103511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.103573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.103586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.103607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.103622 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.104698 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/3.log" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.110982 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.111204 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.130561 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.145781 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.161732 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.180303 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea
951b64bd206763f553790159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:15Z\\\",\\\"message\\\":\\\"4 06:10:15.496952 6804 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1204 06:10:15.496954 6804 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-zdmhj in node crc\\\\nI1204 06:10:15.496958 6804 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1204 06:10:15.496978 6804 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1204 06:10:15.496931 6804 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF1204 06:10:15.496856 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:10:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.196766 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.207285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.207330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.207343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.207362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.207373 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.211281 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.221997 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.233365 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 
06:10:17.249043 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.266419 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.279252 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.289592 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.309650 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.310015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.310041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.310051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.310066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.310077 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.322727 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.345798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.359525 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.372071 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.385257 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 
06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.413241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.413323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.413335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.413357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.413378 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.518106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.518705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.518923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.519115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.519499 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.623684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.623746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.623767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.623794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.623814 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.682833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.682903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.682914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.682934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.682947 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.702080 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.707343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.707580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.707670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.707986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.708079 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.709468 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.709593 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.729105 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.733771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.733882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.733956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.734052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.734128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.747452 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.751273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.751325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.751339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.751359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.751371 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.765186 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.769437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.769495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.769508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.769530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.769545 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.781187 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:17Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:17 crc kubenswrapper[4832]: E1204 06:10:17.781482 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.783545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.783610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.783629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.783658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.783680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.886946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.886988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.887000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.887027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.887040 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.990615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.990660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.990672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.990693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.990706 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 04 06:10:17 crc kubenswrapper[4832]: I1204 06:10:17.990706 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:17Z","lastTransitionTime":"2025-12-04T06:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.095054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.095139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.095166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.095293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.095356 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.203791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.203830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.203839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.203855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.203865 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.306894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.306962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.306977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.306999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.307015 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.410342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.410432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.410442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.410464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.410477 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.513428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.513485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.513499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.513519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.513575 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.617474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.617543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.617562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.617590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
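
The repeated "Error updating node status, will retry" entries earlier, capped by "Unable to update node status" err="update node status exceeds retry count" at 06:10:17.781482, are the visible shape of a bounded retry loop: each status sync the kubelet attempts the PATCH a fixed number of times (five in upstream kubelet, the nodeStatusUpdateRetry constant) and then abandons the update until the next sync period. A compact Go sketch of that pattern follows; the function names are assumed for illustration, and the error strings mirror the log rather than quote kubelet source.

package main

// retrysketch: the bounded-retry shape behind the kubelet's
// "will retry" / "exceeds retry count" message pair.

import (
	"errors"
	"fmt"
)

// Mirrors upstream kubelet's nodeStatusUpdateRetry constant: how many
// times one status sync may attempt the node PATCH before giving up.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the PATCH to the API server; here it
// always fails the way the log does, with the webhook's TLS error.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := patchNodeStatus()
		if err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
	// The kubelet then waits for the next sync period and starts over,
	// which is why the same large status patch keeps reappearing above.
}
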
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.617610 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.709803 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.709911 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.709803 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 06:10:18 crc kubenswrapper[4832]: E1204 06:10:18.709976 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 06:10:18 crc kubenswrapper[4832]: E1204 06:10:18.710091 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 04 06:10:18 crc kubenswrapper[4832]: E1204 06:10:18.710302 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
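
The three "Error syncing pod, skipping" failures above share one root cause with the recurring NodeNotReady condition: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet, so no pod sandbox can be wired up. Below is a stdlib-only Go sketch of that readiness probe; the directory comes from the log, the extensions are the conventional CNI config suffixes, and this is an illustration of the check rather than the kubelet's actual implementation.

package main

// cnicheck: report whether the CNI config directory named in the log
// contains any network configuration the runtime could load.

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read dir:", err)
		os.Exit(1)
	}

	var confs []string
	for _, e := range entries {
		// Conventional CNI config file suffixes.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(confDir, e.Name()))
		}
	}

	if len(confs) == 0 {
		// The state this node is stuck in: the network provider has
		// not written its configuration yet.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configuration found:", confs)
}

On this node the directory is still empty because the network provider has not started, which is exactly what the recurring NodeNotReady heartbeat says; once the provider writes its config file there, NetworkReady flips to true and the stuck pods get sandboxes.
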
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.720707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.720754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.720772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.720795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.720813 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.824458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.824535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.824550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.824570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.824586 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.929032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.929128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.929159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.929210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:18 crc kubenswrapper[4832]: I1204 06:10:18.929238 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:18Z","lastTransitionTime":"2025-12-04T06:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.031687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.031721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.031731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.031745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.031753 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.133641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.133689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.133700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.133717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.133729 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.235616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.235660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.235668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.235683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.235692 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.337752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.337803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.337818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.337838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.337852 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.440245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.440314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.440339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.440372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.440429 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.543129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.543220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.543268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.543291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.543306 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.645799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.645854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.645869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.645893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.645907 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.709924 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:19 crc kubenswrapper[4832]: E1204 06:10:19.710086 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.747778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.747830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.747841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.747858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.747871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.850371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.850432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.850446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.850460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.850470 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.952587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.952644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.952660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.952675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:19 crc kubenswrapper[4832]: I1204 06:10:19.952684 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:19Z","lastTransitionTime":"2025-12-04T06:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.054982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.055039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.055053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.055074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.055090 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.157466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.157515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.157526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.157543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.157554 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.260333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.260378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.260410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.260430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.260441 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.363059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.363104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.363119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.363138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.363151 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.465374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.465444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.465460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.465480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.465494 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.568068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.568155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.568183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.568214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.568254 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.670502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.670571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.670593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.670622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.670645 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.710242 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.710323 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.710278 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:20 crc kubenswrapper[4832]: E1204 06:10:20.710491 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:20 crc kubenswrapper[4832]: E1204 06:10:20.710654 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:20 crc kubenswrapper[4832]: E1204 06:10:20.710764 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.773894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.773951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.773976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.774006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.774026 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.877323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.877472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.877497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.877528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.877548 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.981176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.981261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.981283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.981304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:20 crc kubenswrapper[4832]: I1204 06:10:20.981317 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:20Z","lastTransitionTime":"2025-12-04T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.084529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.084605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.084627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.084650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.084669 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.187680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.187731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.187745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.187763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.187778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.291318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.291445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.291484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.291518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.291544 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.394076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.394123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.394134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.394150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.394161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.497082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.497160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.497183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.497214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.497239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.599841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.599919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.599941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.599967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.599984 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.702624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.702661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.702672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.702691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.702701 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.709998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:21 crc kubenswrapper[4832]: E1204 06:10:21.710193 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.805862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.805921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.805940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.805964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.805982 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.909229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.909296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.909315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.909351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:21 crc kubenswrapper[4832]: I1204 06:10:21.909372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:21Z","lastTransitionTime":"2025-12-04T06:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.012027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.012083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.012104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.012130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.012152 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.114789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.114857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.114874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.114899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.114917 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.217684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.217746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.217768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.217795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.217812 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.320530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.320593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.320619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.320652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.320678 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.423328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.423368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.423377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.423427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.423437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.526297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.526387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.526456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.526492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.526515 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.630261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.630335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.630349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.630372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.630386 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.709609 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.709610 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:22 crc kubenswrapper[4832]: E1204 06:10:22.709753 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:22 crc kubenswrapper[4832]: E1204 06:10:22.709830 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.709645 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:22 crc kubenswrapper[4832]: E1204 06:10:22.709928 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.732575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.732613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.732627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.732642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.732707 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.835363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.835445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.835461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.835481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.835500 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.937319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.937361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.937372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.937404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:22 crc kubenswrapper[4832]: I1204 06:10:22.937421 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:22Z","lastTransitionTime":"2025-12-04T06:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.039194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.039464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.039486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.039507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.039539 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.144499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.144534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.144546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.144559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.144569 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.247273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.247311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.247322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.247339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.247349 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.350477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.350525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.350536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.350551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.350599 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.452658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.452695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.452703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.452718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.452727 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.554896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.554967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.554985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.555010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.555030 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.657272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.657308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.657319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.657335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.657344 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.710483 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:23 crc kubenswrapper[4832]: E1204 06:10:23.710870 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.759934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.759973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.759983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.760015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.760030 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.862057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.862514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.862681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.862868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.863007 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.965786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.966261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.966524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.966730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:23 crc kubenswrapper[4832]: I1204 06:10:23.966871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:23Z","lastTransitionTime":"2025-12-04T06:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.069794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.070201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.070353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.070499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.070624 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.173161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.173210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.173222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.173240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.173251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.276041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.276076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.276086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.276102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.276112 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.378587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.378635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.378649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.378667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.378680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.480806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.480863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.480872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.480888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.480897 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.583202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.583236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.583245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.583259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.583267 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.685418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.685458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.685471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.685487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.685501 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.710557 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.710721 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:24 crc kubenswrapper[4832]: E1204 06:10:24.710958 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.711357 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:24 crc kubenswrapper[4832]: E1204 06:10:24.711572 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:24 crc kubenswrapper[4832]: E1204 06:10:24.711897 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.726664 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.739856 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.751319 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.763803 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 
06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.775747 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.786328 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.789011 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.789285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.789599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.789877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.790642 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.803536 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.833569 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea
951b64bd206763f553790159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:15Z\\\",\\\"message\\\":\\\"4 06:10:15.496952 6804 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1204 06:10:15.496954 6804 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-zdmhj in node crc\\\\nI1204 06:10:15.496958 6804 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1204 06:10:15.496978 6804 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1204 06:10:15.496931 6804 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF1204 06:10:15.496856 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:10:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.846335 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.856116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.865597 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.876691 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.886142 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.893684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.893738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.893750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.893767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.893778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.897819 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.907651 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.919592 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.927445 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.935204 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:24Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:24 crc 
kubenswrapper[4832]: I1204 06:10:24.995720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.995776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.995795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.995818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:24 crc kubenswrapper[4832]: I1204 06:10:24.995834 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:24Z","lastTransitionTime":"2025-12-04T06:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.099187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.099258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.099293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.099321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.099340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.201911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.201953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.201964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.201980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.201992 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.305232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.305278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.305287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.305305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.305314 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.408383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.408525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.408564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.408596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.408617 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.511435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.511468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.511476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.511491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.511500 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.615076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.615158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.615177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.615199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.615310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.710463 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn"
Dec 04 06:10:25 crc kubenswrapper[4832]: E1204 06:10:25.710601 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.717323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.717438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.717452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.717485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.717493 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:25Z","lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
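The setters.go:603 records above carry the node's Ready condition as an inline JSON object. As a working aid for this journal, here is a minimal Python sketch that pulls that condition out of such a record; the abridged LINE constant and the parse_condition helper are illustrative assumptions, not part of the kubelet.

```python
# Minimal sketch: extract the condition={...} payload from a kubelet
# "Node became not ready" record like the setters.go:603 lines above.
# LINE is abridged from this journal; parse_condition is a hypothetical helper.
import json
import re

LINE = (
    'Dec 04 06:10:25 crc kubenswrapper[4832]: I1204 06:10:25.615310 4832 '
    'setters.go:603] "Node became not ready" node="crc" '
    'condition={"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2025-12-04T06:10:25Z",'
    '"lastTransitionTime":"2025-12-04T06:10:25Z","reason":"KubeletNotReady",'
    '"message":"container runtime network not ready"}'
)

def parse_condition(line):
    """Return the condition={...} payload as a dict, or None if absent."""
    match = re.search(r'condition=(\{.*\})\s*$', line)
    return json.loads(match.group(1)) if match else None

cond = parse_condition(LINE)
print(cond["type"], cond["status"], cond["reason"])  # Ready False KubeletNotReady
```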
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.437245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.437512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.437587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.437658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.437734 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:26Z","lastTransitionTime":"2025-12-04T06:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.505083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.505301 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.505503 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.505664 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.505641904 +0000 UTC m=+146.118459620 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.505773 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed.
No retries permitted until 2025-12-04 06:11:30.505761397 +0000 UTC m=+146.118579103 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.540911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.540959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.540971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.540989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.541000 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:26Z","lastTransitionTime":"2025-12-04T06:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.606240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.606375 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.606493 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.60647347 +0000 UTC m=+146.219291176 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.643814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.643861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.643871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.643888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.643898 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:26Z","lastTransitionTime":"2025-12-04T06:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.707592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.707636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707754 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707754 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707768 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707776 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707781 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707785 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707829 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.707816268 +0000 UTC m=+146.320633974 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.707863 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.707837209 +0000 UTC m=+146.320654915 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.709518 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.709518 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.709568 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.709761 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.709825 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:26 crc kubenswrapper[4832]: E1204 06:10:26.709903 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.746885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.746919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.746927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.746942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.746951 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:26Z","lastTransitionTime":"2025-12-04T06:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.849236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.849295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.849309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.849328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.849340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:26Z","lastTransitionTime":"2025-12-04T06:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.951990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.952032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.952043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.952058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:26 crc kubenswrapper[4832]: I1204 06:10:26.952069 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:26Z","lastTransitionTime":"2025-12-04T06:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.054810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.054848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.054865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.054891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.054903 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.157073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.157112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.157121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.157137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.157146 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.259959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.260294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.260305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.260323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.260332 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.363700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.363778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.363805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.363845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.363870 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.466076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.466129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.466153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.466171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.466183 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
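The nestedpendingoperations.go:348 entries above push each failed volume operation's next retry out by 1m4s (durationBeforeRetry). That spacing is consistent with an exponential backoff that starts around 500ms and doubles per consecutive failure (0.5s * 2^7 = 64s); the initial value and cap in the sketch below are assumptions for illustration, not values read from this log.

```python
# Minimal sketch of the doubling backoff implied by "durationBeforeRetry 1m4s":
# 0.5 * 2**7 == 64 seconds == 1m4s. The 0.5s initial delay and the 122s cap
# are assumed here; this journal does not state the kubelet's exact parameters.
def duration_before_retry(failures, initial=0.5, factor=2.0, cap=122.0):
    """Delay in seconds before the next retry after `failures` consecutive failures."""
    return min(initial * factor ** failures, cap)

for n in range(9):
    print(n, duration_before_retry(n))  # 0.5, 1, 2, ..., 64.0 at n=7, capped at n=8
```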
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.568496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.568532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.568543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.568558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.568569 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.670032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.670061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.670069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.670085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.670094 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.709696 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn"
Dec 04 06:10:27 crc kubenswrapper[4832]: E1204 06:10:27.709840 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.771844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.771889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.771901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.771920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.771932 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.874326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.874411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.874422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.874439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:27 crc kubenswrapper[4832]: I1204 06:10:27.874448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:27Z","lastTransitionTime":"2025-12-04T06:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.098407 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.101168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.101212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.101223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.101239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.101250 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.111504 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.114240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.114277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.114289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.114305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.114316 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.124634 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.127322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.127362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.127373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.127402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.127417 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.140890 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.143607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.143636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.143646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.143661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.143670 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.154419 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"897682a6-bffb-4874-9d5a-2be09a040e0d\\\",\\\"systemUUID\\\":\\\"a88f56e0-14a4-42ae-9cb0-d2faa7a8aa13\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:28Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.154530 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.183110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.183158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.183169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.183183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.183192 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.285157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.285193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.285207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.285221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.285230 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.387645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.387692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.387705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.387722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.387732 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.490170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.490223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.490243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.490257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.490267 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.592862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.592904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.592920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.592940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.592954 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.694912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.694954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.694964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.694983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.694995 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.709974 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.710140 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.710458 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.710565 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.710676 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.710773 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.711820 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:10:28 crc kubenswrapper[4832]: E1204 06:10:28.712099 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.797811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.797890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.797904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.797923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.797937 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.900518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.900563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.900574 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.900589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:28 crc kubenswrapper[4832]: I1204 06:10:28.900601 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:28Z","lastTransitionTime":"2025-12-04T06:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.002707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.002746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.002755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.002769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.002779 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.105340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.105381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.105438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.105457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.105469 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.207787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.207832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.207843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.207864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.207877 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.310418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.310460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.310477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.310494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.310506 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.413285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.413323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.413333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.413348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.413357 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.516732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.516854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.516922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.516956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.516983 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.619463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.619511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.619523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.619542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.619556 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.710594 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:29 crc kubenswrapper[4832]: E1204 06:10:29.710752 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.722530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.722586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.722596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.722611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.722620 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.824519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.824559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.824568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.824581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.824641 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.927046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.927140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.927160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.927185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:29 crc kubenswrapper[4832]: I1204 06:10:29.927202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:29Z","lastTransitionTime":"2025-12-04T06:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.029251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.029290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.029300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.029315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.029325 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.131847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.131895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.131910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.131928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.131940 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.234551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.234601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.234610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.234624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.234632 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.337674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.337747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.337758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.337771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.337781 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.440210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.440775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.440849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.440920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.440979 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.548587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.548625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.548635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.548651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.548662 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.651903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.651955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.651971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.652011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.652029 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.709934 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.710089 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.710178 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:30 crc kubenswrapper[4832]: E1204 06:10:30.710875 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:30 crc kubenswrapper[4832]: E1204 06:10:30.711042 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:30 crc kubenswrapper[4832]: E1204 06:10:30.711177 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.754844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.754906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.754931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.754961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.754982 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.857989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.858308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.858520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.858612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.858680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.961639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.961900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.961981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.962104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:30 crc kubenswrapper[4832]: I1204 06:10:30.962189 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:30Z","lastTransitionTime":"2025-12-04T06:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.065119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.065167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.065176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.065188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.065197 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.167770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.167838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.167850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.167890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.167903 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.271514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.271556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.271566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.271582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.271591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.374233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.374294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.374309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.374326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.374338 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.476871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.476923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.476940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.476965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.476982 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.579532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.579617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.579645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.579670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.579688 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.682423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.682467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.682481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.682501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.682517 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.710589 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:31 crc kubenswrapper[4832]: E1204 06:10:31.710809 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.785723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.785805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.785831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.785859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.785880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.888373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.888480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.888503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.888531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.888552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.991314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.991354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.991366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.991382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:31 crc kubenswrapper[4832]: I1204 06:10:31.991414 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:31Z","lastTransitionTime":"2025-12-04T06:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.095017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.095077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.095095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.095116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.095138 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.198194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.198258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.198273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.198294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.198310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.299712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.299746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.299754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.299767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.299775 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.402275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.402312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.402322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.402337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.402347 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.507094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.507143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.507153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.507169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.507185 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.609519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.609587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.609597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.609612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.609622 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.709995 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.710119 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.710214 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:32 crc kubenswrapper[4832]: E1204 06:10:32.710118 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:32 crc kubenswrapper[4832]: E1204 06:10:32.710472 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:32 crc kubenswrapper[4832]: E1204 06:10:32.710541 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.711600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.711637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.711646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.711663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.711673 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.814149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.814199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.814210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.814226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.814239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.916367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.916430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.916443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.916461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:32 crc kubenswrapper[4832]: I1204 06:10:32.916473 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:32Z","lastTransitionTime":"2025-12-04T06:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.018910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.018947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.018958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.018971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.018979 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.121697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.121756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.121772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.121800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.121818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.224306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.224354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.224365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.224386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.224418 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.326420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.326465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.326474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.326490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.326499 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.428775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.428820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.428829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.428844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.428855 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.531565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.531608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.531618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.531636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.531648 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.634478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.634538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.634553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.634574 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.634590 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.710803 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:33 crc kubenswrapper[4832]: E1204 06:10:33.710960 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.737634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.737677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.737688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.737706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.737717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.840600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.840670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.840681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.840695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.840705 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.943176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.943213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.943222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.943238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:33 crc kubenswrapper[4832]: I1204 06:10:33.943247 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:33Z","lastTransitionTime":"2025-12-04T06:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.046185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.046253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.046271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.046294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.046310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.149262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.149326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.149346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.149373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.149480 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.251017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.251110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.251129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.251151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.251165 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.353651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.353700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.353711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.353728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.353739 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.456293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.456337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.456351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.456372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.456385 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.559048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.559152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.559180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.559206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.559226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.662130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.662230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.662255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.662289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.662310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.710320 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.710469 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:34 crc kubenswrapper[4832]: E1204 06:10:34.710614 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.710675 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:34 crc kubenswrapper[4832]: E1204 06:10:34.710830 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:34 crc kubenswrapper[4832]: E1204 06:10:34.710900 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.732919 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e942db0-ad02-44d3-ae6e-65fa43b714e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958bbc395592cd9d31b640fb78a3d31eedc0ea6201e6a6b959a40f9255667119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06
:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d44e5151e84e5ddcfdf4c55843d052e776a247bcddf3ccf9cade60d0139662b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43247fdce316e6bf2bf3f8d2cfa1a5f3def5407787370900f9180ea4360ba0a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.752218 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdd51abf-8583-43b3-ac0e-750570e05aa1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7fc2ab450dc15d6e870ca441f100aedec9bbc8cf5085a4448eb361a2bd7971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd87caf81f133869c458e71c3c881af074e53afbb3b01e97fa3efd0002077c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9380c3f65d93675e7598bcaa6c7364057e34c7828e2898e46a03c5d0b309fddf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29773996afe215d40108199a0038fb947a1989c3221207d7afecaca07485b6d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.766132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.766185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.766201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.766221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.766237 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.775860 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6029efe909b397e804a210564aba26d9da874a1da8005cc028889aab02908955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88570e79800500d6d96c4a7e842524d16b987035c23f61aca9b3ffc232706f55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.800558 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c442d280-de5c-4240-90b3-af48bbb2f1c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:15Z\\\",\\\"message\\\":\\\"4 06:10:15.496952 6804 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1204 06:10:15.496954 6804 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-zdmhj in node crc\\\\nI1204 06:10:15.496958 6804 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1204 06:10:15.496978 6804 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1204 06:10:15.496931 6804 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF1204 06:10:15.496856 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:10:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cwds7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zdmhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.817175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac236b1646d98089fd38aa37923f55c8f090801c5300ad06958e8bd2aad17e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.835986 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.845464 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-97mnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bc4584c-cbf3-472e-ab0e-1ada32291529\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edc339fcb5696a3beab4080d7b0dcf90eb68326791e53dd133e2de71f63ae425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zzrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-97mnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.856090 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ab4745-26f8-4cb8-a4c4-c3064251922e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqt29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ctzsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.865870 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d575e62-4d68-4b13-97ee-ac69f6f3ed3a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://046b6ea0354dfc27fc4272b096cc92020bfbd087497902772eb0d352e62959ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77acdf92f51f2a573be7598deee746c2641eb7ea7d499023df670932fa647891\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.868866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.868916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.868932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.868956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.868974 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.882124 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9f33b2b-3ebe-4107-96a0-40d7892a597d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1204 06:09:17.089216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1204 06:09:17.090760 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1508239940/tls.crt::/tmp/serving-cert-1508239940/tls.key\\\\\\\"\\\\nI1204 06:09:22.407624 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1204 06:09:22.409855 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1204 06:09:22.409872 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1204 06:09:22.409889 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1204 06:09:22.409893 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1204 06:09:22.414084 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1204 06:09:22.414157 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1204 06:09:22.414147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1204 06:09:22.414218 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1204 06:09:22.414249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1204 06:09:22.414259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1204 06:09:22.414263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1204 06:09:22.414267 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1204 06:09:22.416537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.895637 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.909313 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd09c3445baaef4da95d07178c5def8c10bdebfc0bdd334223b2ea16035080db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.919456 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0d1459e-480d-42bf-bdc2-0f2c40a73eb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a105d464a7319c2a579120e6da13f8356a01a1214523b349835e066ded55de5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd04b029f4b96ae273fadf432d4efb030b2f777bbade3c44ba0c12650df0a0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrx7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ss7ls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 
06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.933772 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.944356 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4079cbc8-9860-412d-8bb8-37713e677d1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://671266c7dca9620c96b60234fb25bb288755484e418026a56f946040bff971f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hbj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jl6q4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.957408 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jg77n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"289c102f-5bf1-46ae-84a5-37ab6ced4618\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://118247cf70f9b8bc1bae25b818105d8dbba85eeea1ced877c8a5d77a9464afc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b2d5c42c82fdbae9df820e2b36f6caac1803ec359da9cd624ba56bfd4482c4b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d805b06caee60102f32401c21bb60fb7f22f22455f833b837086f1f41575c6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d20e81372d9190b13c403aebc518d49574710fb359c99f6ab77af741f22287c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71ae4e98862792feb052c8de83645275356d94b46a6feefa6378d4a301d8acb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc663b2813c0b5e4622f3f9452b692e84bb5938a9760588420f95521881436e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d481f02a75ba85072d53269091955db3eab44ce2550a63d6cd10465696785\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T06:09:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfstt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jg77n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.969094 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nl9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"325cffd3-4d6a-4916-8ad9-743cdc486769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:10:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T06:10:11Z\\\",\\\"message\\\":\\\"2025-12-04T06:09:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb\\\\n2025-12-04T06:09:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5a4c6c53-d9bf-4cce-8e49-82426caac8fb to /host/opt/cni/bin/\\\\n2025-12-04T06:09:26Z [verbose] multus-daemon started\\\\n2025-12-04T06:09:26Z [verbose] Readiness Indicator file check\\\\n2025-12-04T06:10:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T06:09:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8d2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nl9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.971138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.971183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.971197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.971216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.971227 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:34Z","lastTransitionTime":"2025-12-04T06:10:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:34 crc kubenswrapper[4832]: I1204 06:10:34.979561 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqplg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"546cfc29-fe8f-4952-999c-11f1f024aee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T06:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://589a7f69a05e896c41e7c7b1edcb7397dc1a9948450039d06007b151bd848b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T06:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d9nl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T06:09:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqplg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T06:10:34Z is after 2025-08-24T17:21:41Z" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.073056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.073093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.073101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.073113 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.073122 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.175240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.175292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.175308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.175331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.175347 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.278013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.278058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.278069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.278089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.278102 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.380704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.380766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.380804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.380839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.380863 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.483721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.483765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.483777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.483793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.483806 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.586745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.586797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.586818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.586846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.586864 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.689905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.689950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.689963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.689981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.689992 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.710185 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:35 crc kubenswrapper[4832]: E1204 06:10:35.710351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.791498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.791543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.791555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.791571 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.791582 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.894206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.894243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.894251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.894264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.894272 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.996530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.996570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.996578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.996594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:35 crc kubenswrapper[4832]: I1204 06:10:35.996603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:35Z","lastTransitionTime":"2025-12-04T06:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.099657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.099707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.099719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.099738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.099752 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.202493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.202539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.202552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.202570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.202583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.305005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.305045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.305055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.305070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.305081 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.408087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.408149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.408166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.408189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.408205 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.510807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.510849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.510859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.510875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.510885 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.613529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.613607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.613624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.613654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.613681 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.710718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.710892 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.711029 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
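
"No sandbox for pod can be found. Need to start a new one" means these pods have never had a sandbox created. Before creating one, the kubelet refuses to sync any pod that needs the cluster network while the runtime network is not ready; hostNetwork pods (the static control-plane pods, for example) are exempt, which is why they keep running throughout this log. A minimal sketch of that gate follows, with hypothetical types; it illustrates the behavior, not the kubelet's code.

    package main

    import (
        "errors"
        "fmt"
    )

    // Pod is a hypothetical, trimmed-down pod record.
    type Pod struct {
        Name        string
        HostNetwork bool
    }

    var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady")

    // canSyncPod rejects pods that require pod networking while the runtime
    // reports the network as not ready; hostNetwork pods pass through.
    func canSyncPod(pod Pod, runtimeNetworkReady bool) error {
        if !runtimeNetworkReady && !pod.HostNetwork {
            return errNetworkNotReady
        }
        return nil
    }

    func main() {
        pods := []Pod{
            {Name: "openshift-multus/network-metrics-daemon-ctzsn", HostNetwork: false},
            {Name: "openshift-etcd/etcd-crc", HostNetwork: true},
        }
        for _, p := range pods {
            if err := canSyncPod(p, false); err != nil {
                fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.Name)
                continue
            }
            fmt.Printf("starting a new sandbox for pod %q\n", p.Name)
        }
    }
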
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:36 crc kubenswrapper[4832]: E1204 06:10:36.711350 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:36 crc kubenswrapper[4832]: E1204 06:10:36.711564 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.717075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.717125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.717162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.717178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.717188 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.819054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.819108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.819118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.819139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.819152 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.921568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.921609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.921624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.921641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:36 crc kubenswrapper[4832]: I1204 06:10:36.921650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:36Z","lastTransitionTime":"2025-12-04T06:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.024273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.024331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.024350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.024372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.024386 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.126527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.126566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.126577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.126592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.126603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.229370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.229440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.229456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.229475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.229489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.332533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.332582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.332591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.332610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.332621 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.435456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.435498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.435510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.435527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.435540 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.538463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.538535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.538555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.538581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.538599 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.642627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.642683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.642698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.642721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.642739 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.710466 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:37 crc kubenswrapper[4832]: E1204 06:10:37.711319 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.745738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.745808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.745830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.745860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.745885 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.849515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.850003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.850092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.850182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.850256 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.953177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.953219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.953228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.953243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:37 crc kubenswrapper[4832]: I1204 06:10:37.953255 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:37Z","lastTransitionTime":"2025-12-04T06:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.056115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.056455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.056575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.056697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.056840 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:38Z","lastTransitionTime":"2025-12-04T06:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.159280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.159360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.159380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.159436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.159454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:38Z","lastTransitionTime":"2025-12-04T06:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.251713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.251765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.251787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.251806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.251819 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T06:10:38Z","lastTransitionTime":"2025-12-04T06:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.295213 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m"] Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.295925 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.297822 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.298022 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.298302 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.298607 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.327520 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.327504208 podStartE2EDuration="42.327504208s" podCreationTimestamp="2025-12-04 06:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.313345233 +0000 UTC m=+93.926162949" watchObservedRunningTime="2025-12-04 06:10:38.327504208 +0000 UTC m=+93.940321914" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.331861 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.331900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.331941 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.331960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.332017 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.406888 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.406868576 podStartE2EDuration="1m13.406868576s" podCreationTimestamp="2025-12-04 06:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.390856626 +0000 UTC m=+94.003674352" watchObservedRunningTime="2025-12-04 06:10:38.406868576 +0000 UTC m=+94.019686282" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.417297 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-97mnv" podStartSLOduration=75.417281442 podStartE2EDuration="1m15.417281442s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.416573256 +0000 UTC m=+94.029390972" watchObservedRunningTime="2025-12-04 06:10:38.417281442 +0000 UTC m=+94.030099148" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.433288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.433347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" 
(UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.433367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.433421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.433438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.434354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.434436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.434712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.440063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.449102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18e38367-3360-4d1b-b3f3-4fa9fe2e29ad-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sgl2m\" (UID: \"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc 
Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.452877 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.452859664 podStartE2EDuration="1m16.452859664s" podCreationTimestamp="2025-12-04 06:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.452471415 +0000 UTC m=+94.065289131" watchObservedRunningTime="2025-12-04 06:10:38.452859664 +0000 UTC m=+94.065677370"
Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.493888 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.493871615 podStartE2EDuration="28.493871615s" podCreationTimestamp="2025-12-04 06:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.493573969 +0000 UTC m=+94.106391675" watchObservedRunningTime="2025-12-04 06:10:38.493871615 +0000 UTC m=+94.106689321"
Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.523333 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jg77n" podStartSLOduration=75.523310162 podStartE2EDuration="1m15.523310162s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.522719108 +0000 UTC m=+94.135536844" watchObservedRunningTime="2025-12-04 06:10:38.523310162 +0000 UTC m=+94.136127888"
Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.524436 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podStartSLOduration=75.524413059 podStartE2EDuration="1m15.524413059s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.506599937 +0000 UTC m=+94.119417653" watchObservedRunningTime="2025-12-04 06:10:38.524413059 +0000 UTC m=+94.137230775"
Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.538526 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9nl9n" podStartSLOduration=75.538506242 podStartE2EDuration="1m15.538506242s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.538077941 +0000 UTC m=+94.150895647" watchObservedRunningTime="2025-12-04 06:10:38.538506242 +0000 UTC m=+94.151323948"
Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.557992 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dqplg" podStartSLOduration=75.557974293 podStartE2EDuration="1m15.557974293s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.547415633 +0000 UTC m=+94.160233339" watchObservedRunningTime="2025-12-04 06:10:38.557974293 +0000 UTC m=+94.170791999"
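
Each "Observed pod startup duration" entry reports podStartSLOduration, which in every case here equals watchObservedRunningTime minus podCreationTimestamp; the zero (0001-01-01) firstStartedPulling/lastFinishedPulling timestamps mean the images were already on disk, so no image-pull time is subtracted. The arithmetic below is a rough reconstruction under that reading, not the tracker's actual source.

    package main

    import (
        "fmt"
        "time"
    )

    // podStartSLODuration: time from pod creation to the observation of the
    // pod running, minus any time spent pulling images. Zero pull timestamps
    // mean nothing is subtracted.
    func podStartSLODuration(created, observedRunning, pullStart, pullEnd time.Time) time.Duration {
        d := observedRunning.Sub(created)
        if !pullStart.IsZero() && !pullEnd.IsZero() {
            d -= pullEnd.Sub(pullStart)
        }
        return d
    }

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-12-04T06:09:56Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-12-04T06:10:38.327504208Z")
        // Prints 42.327504208s, matching podStartSLOduration for
        // openshift-kube-scheduler/openshift-kube-scheduler-crc above.
        fmt.Println(podStartSLODuration(created, observed, time.Time{}, time.Time{}))
    }
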
"Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ss7ls" podStartSLOduration=74.558223839 podStartE2EDuration="1m14.558223839s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:38.557622754 +0000 UTC m=+94.170440460" watchObservedRunningTime="2025-12-04 06:10:38.558223839 +0000 UTC m=+94.171041545" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.612570 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.709506 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.709573 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:38 crc kubenswrapper[4832]: E1204 06:10:38.709961 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:38 crc kubenswrapper[4832]: I1204 06:10:38.709572 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:38 crc kubenswrapper[4832]: E1204 06:10:38.710020 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:38 crc kubenswrapper[4832]: E1204 06:10:38.710066 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:39 crc kubenswrapper[4832]: I1204 06:10:39.179880 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" event={"ID":"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad","Type":"ContainerStarted","Data":"35a934439842ea8f8cdb395c8ddafc34738075c8f3d686c9e94ea9b78d546ea0"} Dec 04 06:10:39 crc kubenswrapper[4832]: I1204 06:10:39.179953 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" event={"ID":"18e38367-3360-4d1b-b3f3-4fa9fe2e29ad","Type":"ContainerStarted","Data":"013959f68208de2be78fea6eece555539c8adc9c133c9e4ce8511881edd4142a"} Dec 04 06:10:39 crc kubenswrapper[4832]: I1204 06:10:39.202498 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sgl2m" podStartSLOduration=76.202478807 podStartE2EDuration="1m16.202478807s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:39.202467327 +0000 UTC m=+94.815285063" watchObservedRunningTime="2025-12-04 06:10:39.202478807 +0000 UTC m=+94.815296553" Dec 04 06:10:39 crc kubenswrapper[4832]: I1204 06:10:39.709592 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:39 crc kubenswrapper[4832]: E1204 06:10:39.709753 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:40 crc kubenswrapper[4832]: I1204 06:10:40.711829 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:40 crc kubenswrapper[4832]: I1204 06:10:40.711843 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:40 crc kubenswrapper[4832]: I1204 06:10:40.711924 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:40 crc kubenswrapper[4832]: E1204 06:10:40.712254 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:40 crc kubenswrapper[4832]: E1204 06:10:40.712349 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:40 crc kubenswrapper[4832]: E1204 06:10:40.712451 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:41 crc kubenswrapper[4832]: I1204 06:10:41.710566 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:41 crc kubenswrapper[4832]: E1204 06:10:41.710997 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:41 crc kubenswrapper[4832]: I1204 06:10:41.711206 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:10:41 crc kubenswrapper[4832]: E1204 06:10:41.711392 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:10:41 crc kubenswrapper[4832]: I1204 06:10:41.726849 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 04 06:10:41 crc kubenswrapper[4832]: I1204 06:10:41.765987 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:41 crc kubenswrapper[4832]: E1204 06:10:41.766125 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:10:41 crc kubenswrapper[4832]: E1204 06:10:41.766181 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs podName:37ab4745-26f8-4cb8-a4c4-c3064251922e nodeName:}" failed. No retries permitted until 2025-12-04 06:11:45.766162705 +0000 UTC m=+161.378980411 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs") pod "network-metrics-daemon-ctzsn" (UID: "37ab4745-26f8-4cb8-a4c4-c3064251922e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 06:10:42 crc kubenswrapper[4832]: I1204 06:10:42.710605 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:42 crc kubenswrapper[4832]: I1204 06:10:42.710682 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:42 crc kubenswrapper[4832]: I1204 06:10:42.710704 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:42 crc kubenswrapper[4832]: E1204 06:10:42.710757 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:42 crc kubenswrapper[4832]: E1204 06:10:42.710874 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:42 crc kubenswrapper[4832]: E1204 06:10:42.711006 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:43 crc kubenswrapper[4832]: I1204 06:10:43.709886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:43 crc kubenswrapper[4832]: E1204 06:10:43.709995 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:44 crc kubenswrapper[4832]: I1204 06:10:44.709919 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:44 crc kubenswrapper[4832]: I1204 06:10:44.709989 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:44 crc kubenswrapper[4832]: E1204 06:10:44.711278 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:44 crc kubenswrapper[4832]: I1204 06:10:44.711296 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:44 crc kubenswrapper[4832]: E1204 06:10:44.711488 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:44 crc kubenswrapper[4832]: E1204 06:10:44.711567 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:45 crc kubenswrapper[4832]: I1204 06:10:45.709879 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:45 crc kubenswrapper[4832]: E1204 06:10:45.709985 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:46 crc kubenswrapper[4832]: I1204 06:10:46.710538 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:46 crc kubenswrapper[4832]: I1204 06:10:46.710586 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:46 crc kubenswrapper[4832]: I1204 06:10:46.710538 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:46 crc kubenswrapper[4832]: E1204 06:10:46.710747 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:46 crc kubenswrapper[4832]: E1204 06:10:46.710678 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:46 crc kubenswrapper[4832]: E1204 06:10:46.710864 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:47 crc kubenswrapper[4832]: I1204 06:10:47.709878 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:47 crc kubenswrapper[4832]: E1204 06:10:47.710138 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:48 crc kubenswrapper[4832]: I1204 06:10:48.709780 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:48 crc kubenswrapper[4832]: I1204 06:10:48.709788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:48 crc kubenswrapper[4832]: E1204 06:10:48.709908 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:48 crc kubenswrapper[4832]: I1204 06:10:48.709927 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:48 crc kubenswrapper[4832]: E1204 06:10:48.710043 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:48 crc kubenswrapper[4832]: E1204 06:10:48.710149 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:49 crc kubenswrapper[4832]: I1204 06:10:49.709510 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:49 crc kubenswrapper[4832]: E1204 06:10:49.709851 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:50 crc kubenswrapper[4832]: I1204 06:10:50.709981 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:50 crc kubenswrapper[4832]: E1204 06:10:50.710156 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:50 crc kubenswrapper[4832]: I1204 06:10:50.710499 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:50 crc kubenswrapper[4832]: E1204 06:10:50.710606 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:50 crc kubenswrapper[4832]: I1204 06:10:50.710672 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:50 crc kubenswrapper[4832]: E1204 06:10:50.710790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:51 crc kubenswrapper[4832]: I1204 06:10:51.710299 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:51 crc kubenswrapper[4832]: E1204 06:10:51.710557 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:52 crc kubenswrapper[4832]: I1204 06:10:52.709974 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:52 crc kubenswrapper[4832]: E1204 06:10:52.710108 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:52 crc kubenswrapper[4832]: I1204 06:10:52.710116 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:52 crc kubenswrapper[4832]: I1204 06:10:52.710216 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:52 crc kubenswrapper[4832]: E1204 06:10:52.710255 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:52 crc kubenswrapper[4832]: E1204 06:10:52.710454 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:52 crc kubenswrapper[4832]: I1204 06:10:52.711267 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:10:52 crc kubenswrapper[4832]: E1204 06:10:52.711497 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zdmhj_openshift-ovn-kubernetes(c442d280-de5c-4240-90b3-af48bbb2f1c5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" Dec 04 06:10:53 crc kubenswrapper[4832]: I1204 06:10:53.710581 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:53 crc kubenswrapper[4832]: E1204 06:10:53.710926 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:54 crc kubenswrapper[4832]: I1204 06:10:54.709879 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:54 crc kubenswrapper[4832]: I1204 06:10:54.709961 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:54 crc kubenswrapper[4832]: I1204 06:10:54.711436 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:54 crc kubenswrapper[4832]: E1204 06:10:54.711437 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:54 crc kubenswrapper[4832]: E1204 06:10:54.711489 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:54 crc kubenswrapper[4832]: E1204 06:10:54.711566 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:55 crc kubenswrapper[4832]: I1204 06:10:55.710196 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:55 crc kubenswrapper[4832]: E1204 06:10:55.710614 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:56 crc kubenswrapper[4832]: I1204 06:10:56.709792 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:56 crc kubenswrapper[4832]: I1204 06:10:56.709860 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:56 crc kubenswrapper[4832]: E1204 06:10:56.709937 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:56 crc kubenswrapper[4832]: I1204 06:10:56.709874 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:56 crc kubenswrapper[4832]: E1204 06:10:56.709997 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:56 crc kubenswrapper[4832]: E1204 06:10:56.710054 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:57 crc kubenswrapper[4832]: I1204 06:10:57.709760 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:57 crc kubenswrapper[4832]: E1204 06:10:57.709886 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.235064 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/1.log" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.235699 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/0.log" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.235792 4832 generic.go:334] "Generic (PLEG): container finished" podID="325cffd3-4d6a-4916-8ad9-743cdc486769" containerID="cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8" exitCode=1 Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.235838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerDied","Data":"cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8"} Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.235894 4832 scope.go:117] "RemoveContainer" containerID="145afbbc0154e6d86d2072c5519ce88c153f30c1e3b97c48f2d4acac3c1d19cf" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.236616 4832 scope.go:117] "RemoveContainer" containerID="cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8" Dec 04 06:10:58 crc kubenswrapper[4832]: E1204 06:10:58.236930 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9nl9n_openshift-multus(325cffd3-4d6a-4916-8ad9-743cdc486769)\"" pod="openshift-multus/multus-9nl9n" podUID="325cffd3-4d6a-4916-8ad9-743cdc486769" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.259094 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.259053699 podStartE2EDuration="17.259053699s" podCreationTimestamp="2025-12-04 06:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:10:44.749446323 +0000 UTC m=+100.362264049" watchObservedRunningTime="2025-12-04 06:10:58.259053699 +0000 UTC m=+113.871871405" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.710189 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.710250 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:10:58 crc kubenswrapper[4832]: I1204 06:10:58.710189 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:10:58 crc kubenswrapper[4832]: E1204 06:10:58.710302 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:10:58 crc kubenswrapper[4832]: E1204 06:10:58.710486 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:10:58 crc kubenswrapper[4832]: E1204 06:10:58.710644 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:10:59 crc kubenswrapper[4832]: I1204 06:10:59.240423 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/1.log" Dec 04 06:10:59 crc kubenswrapper[4832]: I1204 06:10:59.710028 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:10:59 crc kubenswrapper[4832]: E1204 06:10:59.710160 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:00 crc kubenswrapper[4832]: I1204 06:11:00.709551 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:00 crc kubenswrapper[4832]: E1204 06:11:00.709663 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:00 crc kubenswrapper[4832]: I1204 06:11:00.709836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:00 crc kubenswrapper[4832]: E1204 06:11:00.709878 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:00 crc kubenswrapper[4832]: I1204 06:11:00.709975 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:00 crc kubenswrapper[4832]: E1204 06:11:00.710012 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:01 crc kubenswrapper[4832]: I1204 06:11:01.709856 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:01 crc kubenswrapper[4832]: E1204 06:11:01.710006 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:02 crc kubenswrapper[4832]: I1204 06:11:02.710417 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:02 crc kubenswrapper[4832]: E1204 06:11:02.710994 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:02 crc kubenswrapper[4832]: I1204 06:11:02.710513 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:02 crc kubenswrapper[4832]: E1204 06:11:02.711261 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:02 crc kubenswrapper[4832]: I1204 06:11:02.710482 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:02 crc kubenswrapper[4832]: E1204 06:11:02.711529 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:03 crc kubenswrapper[4832]: I1204 06:11:03.709747 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:03 crc kubenswrapper[4832]: E1204 06:11:03.709941 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:04 crc kubenswrapper[4832]: E1204 06:11:04.662220 4832 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 04 06:11:04 crc kubenswrapper[4832]: I1204 06:11:04.710127 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:04 crc kubenswrapper[4832]: I1204 06:11:04.710207 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:04 crc kubenswrapper[4832]: I1204 06:11:04.710171 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:04 crc kubenswrapper[4832]: E1204 06:11:04.712081 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:04 crc kubenswrapper[4832]: E1204 06:11:04.712194 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:04 crc kubenswrapper[4832]: E1204 06:11:04.712534 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:04 crc kubenswrapper[4832]: E1204 06:11:04.805903 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 06:11:05 crc kubenswrapper[4832]: I1204 06:11:05.710206 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:05 crc kubenswrapper[4832]: E1204 06:11:05.710646 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:06 crc kubenswrapper[4832]: I1204 06:11:06.710576 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:06 crc kubenswrapper[4832]: I1204 06:11:06.710612 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:06 crc kubenswrapper[4832]: I1204 06:11:06.710588 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:06 crc kubenswrapper[4832]: E1204 06:11:06.710776 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:06 crc kubenswrapper[4832]: E1204 06:11:06.711052 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:06 crc kubenswrapper[4832]: E1204 06:11:06.711221 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:06 crc kubenswrapper[4832]: I1204 06:11:06.711606 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:11:07 crc kubenswrapper[4832]: I1204 06:11:07.265335 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/3.log" Dec 04 06:11:07 crc kubenswrapper[4832]: I1204 06:11:07.267946 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerStarted","Data":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} Dec 04 06:11:07 crc kubenswrapper[4832]: I1204 06:11:07.268419 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:11:07 crc kubenswrapper[4832]: I1204 06:11:07.294086 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podStartSLOduration=103.294071542 podStartE2EDuration="1m43.294071542s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:07.293507329 +0000 UTC m=+122.906325045" watchObservedRunningTime="2025-12-04 06:11:07.294071542 +0000 UTC m=+122.906889248" Dec 04 06:11:07 crc kubenswrapper[4832]: I1204 06:11:07.482675 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ctzsn"] Dec 04 06:11:07 crc kubenswrapper[4832]: I1204 06:11:07.482788 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:07 crc kubenswrapper[4832]: E1204 06:11:07.482884 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:08 crc kubenswrapper[4832]: I1204 06:11:08.710425 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:08 crc kubenswrapper[4832]: I1204 06:11:08.710898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:08 crc kubenswrapper[4832]: I1204 06:11:08.710942 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:08 crc kubenswrapper[4832]: I1204 06:11:08.711057 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:08 crc kubenswrapper[4832]: E1204 06:11:08.711157 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:08 crc kubenswrapper[4832]: E1204 06:11:08.711711 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:08 crc kubenswrapper[4832]: E1204 06:11:08.712170 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:08 crc kubenswrapper[4832]: E1204 06:11:08.712605 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:09 crc kubenswrapper[4832]: E1204 06:11:09.808082 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 06:11:10 crc kubenswrapper[4832]: I1204 06:11:10.709718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:10 crc kubenswrapper[4832]: I1204 06:11:10.709764 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:10 crc kubenswrapper[4832]: I1204 06:11:10.709804 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:10 crc kubenswrapper[4832]: I1204 06:11:10.709718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:10 crc kubenswrapper[4832]: E1204 06:11:10.709854 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:10 crc kubenswrapper[4832]: E1204 06:11:10.709977 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:10 crc kubenswrapper[4832]: E1204 06:11:10.710094 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:10 crc kubenswrapper[4832]: E1204 06:11:10.710178 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:12 crc kubenswrapper[4832]: I1204 06:11:12.709962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:12 crc kubenswrapper[4832]: I1204 06:11:12.709962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:12 crc kubenswrapper[4832]: E1204 06:11:12.710274 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:12 crc kubenswrapper[4832]: I1204 06:11:12.710313 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:12 crc kubenswrapper[4832]: I1204 06:11:12.710337 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:12 crc kubenswrapper[4832]: I1204 06:11:12.710373 4832 scope.go:117] "RemoveContainer" containerID="cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8" Dec 04 06:11:12 crc kubenswrapper[4832]: E1204 06:11:12.710598 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:12 crc kubenswrapper[4832]: E1204 06:11:12.710744 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:12 crc kubenswrapper[4832]: E1204 06:11:12.710380 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:13 crc kubenswrapper[4832]: I1204 06:11:13.292674 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/1.log" Dec 04 06:11:13 crc kubenswrapper[4832]: I1204 06:11:13.292959 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerStarted","Data":"884e91c9ce60008aa03c7bf5ca552038900a5fc445619ae1247a88ea68ff4873"} Dec 04 06:11:14 crc kubenswrapper[4832]: I1204 06:11:14.710176 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:14 crc kubenswrapper[4832]: I1204 06:11:14.710370 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:14 crc kubenswrapper[4832]: E1204 06:11:14.711755 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 06:11:14 crc kubenswrapper[4832]: I1204 06:11:14.711781 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:14 crc kubenswrapper[4832]: I1204 06:11:14.711809 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:14 crc kubenswrapper[4832]: E1204 06:11:14.711915 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 06:11:14 crc kubenswrapper[4832]: E1204 06:11:14.712009 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ctzsn" podUID="37ab4745-26f8-4cb8-a4c4-c3064251922e" Dec 04 06:11:14 crc kubenswrapper[4832]: E1204 06:11:14.712080 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.710275 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.710315 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.710316 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.710345 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.713622 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.713774 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.713779 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.714843 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.714876 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 04 06:11:16 crc kubenswrapper[4832]: I1204 06:11:16.714913 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.027975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.064857 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.065410 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.065752 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vcj7x"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.066237 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.066770 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.067319 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.069681 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pqfp2"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.070038 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cpzbl"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.070288 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wtnbm"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.070791 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.071166 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.071483 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.077343 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.078529 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.085625 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcvsv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.086284 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.089078 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.089082 4832 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.089157 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.089732 4832 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: secrets "oauth-openshift-dockercfg-znhcc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.089789 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-openshift-dockercfg-znhcc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.089864 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: secrets "v4-0-config-system-session" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc 
kubenswrapper[4832]: E1204 06:11:19.089878 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-session\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.089913 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: secrets "v4-0-config-system-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.089927 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.089962 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: secrets "v4-0-config-system-ocp-branding-template" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.089975 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090025 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090037 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090070 4832 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource 
"configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090081 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090115 4832 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: configmaps "audit" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090127 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090163 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: secrets "v4-0-config-user-template-error" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090176 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-error\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090215 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: secrets "v4-0-config-user-idp-0-file-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090229 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-idp-0-file-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090262 4832 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in 
API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090275 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090314 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: secrets "v4-0-config-system-router-certs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090326 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-router-certs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090364 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090377 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090461 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090474 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090509 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list 
*v1.ConfigMap: configmaps "v4-0-config-system-service-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090521 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-service-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.090628 4832 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: configmaps "v4-0-config-system-cliconfig" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.090643 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"v4-0-config-system-cliconfig\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.090689 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091043 4832 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091060 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.091224 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091323 4832 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091338 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091378 4832 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091408 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091467 4832 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091478 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091514 4832 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091527 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091563 4832 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091576 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" 
is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091658 4832 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091694 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.091762 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091783 4832 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091808 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091881 4832 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091904 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.091972 4832 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.091994 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092110 4832 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092129 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092172 4832 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092186 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092170 4832 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092211 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092255 4832 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092271 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the 
namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092255 4832 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092301 4832 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092333 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092300 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.092356 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.092496 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092560 4832 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092593 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092682 4832 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between 
node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092703 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.092723 4832 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.092737 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093016 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093178 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093325 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093465 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093589 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093729 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.093731 4832 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.093783 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.093822 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.097488 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.121272 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n68j8"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.121697 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.122010 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.125487 4832 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.125558 4832 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.132496 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.132549 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.132589 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.132661 4832 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.132690 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.132690 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.132701 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.125600 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.132499 4832 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.132958 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.132507 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: W1204 06:11:19.132980 4832 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Dec 04 06:11:19 crc kubenswrapper[4832]: E1204 06:11:19.133001 4832 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.132665 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.133600 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.133708 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.133736 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.133800 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tw7nf"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.133987 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.134180 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.135031 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.135452 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.135693 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.135895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.136261 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.136313 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.136580 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.137266 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.137356 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g2thm"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.137765 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.137782 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.138817 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.140211 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9chqb"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.140854 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b214f93-e9ab-4500-9c6b-6319c5570459-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.140902 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f23bd041-73a3-4443-869f-b7d6221d3763-trusted-ca\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.140930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.140946 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-766fs"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.140951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-config\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141114 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141251 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b214f93-e9ab-4500-9c6b-6319c5570459-images\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141329 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21e75fec-8174-41c0-82b1-a01786d46246-node-pullsecrets\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-serving-cert\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141454 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-config\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141518 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294pt\" (UniqueName: \"kubernetes.io/projected/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-kube-api-access-294pt\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqscq\" (UniqueName: \"kubernetes.io/projected/f23bd041-73a3-4443-869f-b7d6221d3763-kube-api-access-bqscq\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141651 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96d01e99-95dd-4969-ab53-f94c7383886f-serving-cert\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141698 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f23bd041-73a3-4443-869f-b7d6221d3763-serving-cert\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141757 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141805 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-dir\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141879 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141935 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.141960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-client-ca\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142027 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142291 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1bc185a-fac5-4103-947a-d3d660802249-audit-dir\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b214f93-e9ab-4500-9c6b-6319c5570459-config\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: 
\"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rlf\" (UniqueName: \"kubernetes.io/projected/21e75fec-8174-41c0-82b1-a01786d46246-kube-api-access-r4rlf\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142412 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-config\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142430 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d2rth"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142296 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142602 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmhf\" (UniqueName: \"kubernetes.io/projected/a07eda47-4b27-4396-90a1-a6a1569a6f99-kube-api-access-2hmhf\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142807 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142955 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.142986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59nx\" (UniqueName: \"kubernetes.io/projected/8b214f93-e9ab-4500-9c6b-6319c5570459-kube-api-access-c59nx\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143051 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkk2\" (UniqueName: \"kubernetes.io/projected/96d01e99-95dd-4969-ab53-f94c7383886f-kube-api-access-fmkk2\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhz4r\" (UniqueName: \"kubernetes.io/projected/5b2ac879-133d-44de-8d0e-df502cc87c55-kube-api-access-qhz4r\") pod \"machine-approver-56656f9798-8lgmq\" (UID: 
\"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143127 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143144 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kwt\" (UniqueName: \"kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e75fec-8174-41c0-82b1-a01786d46246-audit-dir\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143284 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f23bd041-73a3-4443-869f-b7d6221d3763-config\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143304 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143375 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.143432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-serving-cert\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.144030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/96d01e99-95dd-4969-ab53-f94c7383886f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.144057 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpld\" (UniqueName: \"kubernetes.io/projected/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-kube-api-access-xfpld\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.144091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.144116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gmcc\" (UniqueName: \"kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.144350 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.145418 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.151313 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.151627 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.151800 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.153319 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.154121 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.154594 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.154965 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.156921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.157177 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.157568 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.157580 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.158343 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.159242 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.163120 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.163309 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.163467 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.163758 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164176 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164216 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164349 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164468 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164478 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164561 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164605 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164358 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164720 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.164890 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.168095 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-stxqv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.169360 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.169747 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.170152 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: 
I1204 06:11:19.170278 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.170346 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.170647 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.200541 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.200743 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.200888 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.202347 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.205972 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.206822 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.207184 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.215098 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.217223 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.217260 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.220128 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.220641 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.222749 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.223357 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-62qg2"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.223600 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.223734 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.225601 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.226251 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.226491 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.226554 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.226908 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.227232 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.229117 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.239727 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wr2lk"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.239985 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.240195 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.240730 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.241119 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.241513 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.241528 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.241617 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.241669 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.241709 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.242424 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jdgxv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.242534 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.242748 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pqqsl"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.242898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.243100 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pqfp2"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.243115 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wtnbm"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.243124 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cpzbl"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.243134 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-snlqx"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.243186 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244142 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244535 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vcj7x"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244813 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a423478-8008-4169-a257-ee5b0701c460-config\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48194ca4-ce47-4309-a085-339c9b14f42b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-config\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.244997 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245013 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ab7979f-0ea9-471f-a71a-75f869d58f14-webhook-cert\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j647d\" (UniqueName: \"kubernetes.io/projected/ce327d25-76b0-4c8b-a163-05f4e0976c34-kube-api-access-j647d\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmhf\" (UniqueName: \"kubernetes.io/projected/a07eda47-4b27-4396-90a1-a6a1569a6f99-kube-api-access-2hmhf\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245066 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/361acab0-1cd6-48fc-b6ef-c77dc3092f98-metrics-tls\") pod \"dns-operator-744455d44c-766fs\" (UID: \"361acab0-1cd6-48fc-b6ef-c77dc3092f98\") " pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkk2\" (UniqueName: \"kubernetes.io/projected/96d01e99-95dd-4969-ab53-f94c7383886f-kube-api-access-fmkk2\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c4dc6e5-57fe-454f-89e6-7c37768004b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsw24\" (UniqueName: \"kubernetes.io/projected/a9d3c038-c11e-4925-8802-5c8b57b1aeef-kube-api-access-gsw24\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245317 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245348 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz4r\" (UniqueName: \"kubernetes.io/projected/5b2ac879-133d-44de-8d0e-df502cc87c55-kube-api-access-qhz4r\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245422 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-client\") pod \"etcd-operator-b45778765-d2rth\" (UID: 
\"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4de5b92-e3b3-480f-9241-23e3603eaff2-srv-cert\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245461 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4de5b92-e3b3-480f-9241-23e3603eaff2-profile-collector-cert\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245505 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7kwt\" (UniqueName: \"kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce327d25-76b0-4c8b-a163-05f4e0976c34-serving-cert\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-ca\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245568 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gpf\" (UniqueName: \"kubernetes.io/projected/4a423478-8008-4169-a257-ee5b0701c460-kube-api-access-67gpf\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3ac11867-dffb-4aa1-88ba-d607d5d6f97a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-stxqv\" (UID: \"3ac11867-dffb-4aa1-88ba-d607d5d6f97a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:19 crc 
kubenswrapper[4832]: I1204 06:11:19.245611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245624 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tw7nf"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245631 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d3c038-c11e-4925-8802-5c8b57b1aeef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245668 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fg2\" (UniqueName: \"kubernetes.io/projected/d4de5b92-e3b3-480f-9241-23e3603eaff2-kube-api-access-k7fg2\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-proxy-tls\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdrv\" (UniqueName: \"kubernetes.io/projected/35c827cd-26e4-4f7a-ba65-bb717839a8d4-kube-api-access-7hdrv\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245740 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmt8\" (UniqueName: \"kubernetes.io/projected/50fb7e5f-0fc6-47d2-a953-8fece3489792-kube-api-access-qrmt8\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245694 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: 
\"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b214f93-e9ab-4500-9c6b-6319c5570459-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.245832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f23bd041-73a3-4443-869f-b7d6221d3763-trusted-ca\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246477 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8xx\" (UniqueName: \"kubernetes.io/projected/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-kube-api-access-sr8xx\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246854 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246877 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jnlc\" (UniqueName: \"kubernetes.io/projected/e57a2b10-8b23-4085-a031-3263b4265ccc-kube-api-access-2jnlc\") pod \"cluster-samples-operator-665b6dd947-k95wg\" (UID: \"e57a2b10-8b23-4085-a031-3263b4265ccc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48194ca4-ce47-4309-a085-339c9b14f42b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246959 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f23bd041-73a3-4443-869f-b7d6221d3763-trusted-ca\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.246967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247068 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-config\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/60021a99-658a-4bde-81c3-dae4f4870628-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247192 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21e75fec-8174-41c0-82b1-a01786d46246-node-pullsecrets\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247260 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21e75fec-8174-41c0-82b1-a01786d46246-node-pullsecrets\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247540 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn"] Dec 04 06:11:19 crc 
kubenswrapper[4832]: I1204 06:11:19.248133 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-config\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.247215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-serving-cert\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248655 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zfl\" (UniqueName: \"kubernetes.io/projected/3ac11867-dffb-4aa1-88ba-d607d5d6f97a-kube-api-access-z7zfl\") pod \"multus-admission-controller-857f4d67dd-stxqv\" (UID: \"3ac11867-dffb-4aa1-88ba-d607d5d6f97a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-config\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248754 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-service-ca-bundle\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248805 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248835 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294pt\" (UniqueName: \"kubernetes.io/projected/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-kube-api-access-294pt\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c4dc6e5-57fe-454f-89e6-7c37768004b4-images\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9fp\" (UniqueName: \"kubernetes.io/projected/0ab7979f-0ea9-471f-a71a-75f869d58f14-kube-api-access-wr9fp\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96d01e99-95dd-4969-ab53-f94c7383886f-serving-cert\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kskcm\" (UniqueName: \"kubernetes.io/projected/d35e6baa-6315-48ee-904c-05da7d436283-kube-api-access-kskcm\") pod \"control-plane-machine-set-operator-78cbb6b69f-zzf4r\" (UID: \"d35e6baa-6315-48ee-904c-05da7d436283\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.248984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249017 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f23bd041-73a3-4443-869f-b7d6221d3763-serving-cert\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d35e6baa-6315-48ee-904c-05da7d436283-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zzf4r\" (UID: \"d35e6baa-6315-48ee-904c-05da7d436283\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c827cd-26e4-4f7a-ba65-bb717839a8d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-dir\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-oauth-config\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249124 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-config\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249139 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60021a99-658a-4bde-81c3-dae4f4870628-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e57a2b10-8b23-4085-a031-3263b4265ccc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-k95wg\" (UID: \"e57a2b10-8b23-4085-a031-3263b4265ccc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249174 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-client-ca\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1bc185a-fac5-4103-947a-d3d660802249-audit-dir\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249207 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b214f93-e9ab-4500-9c6b-6319c5570459-config\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ab7979f-0ea9-471f-a71a-75f869d58f14-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rlf\" (UniqueName: \"kubernetes.io/projected/21e75fec-8174-41c0-82b1-a01786d46246-kube-api-access-r4rlf\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c827cd-26e4-4f7a-ba65-bb717839a8d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59nx\" (UniqueName: \"kubernetes.io/projected/8b214f93-e9ab-4500-9c6b-6319c5570459-kube-api-access-c59nx\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249334 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-service-ca\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249364 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1bc185a-fac5-4103-947a-d3d660802249-audit-dir\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249383 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d3c038-c11e-4925-8802-5c8b57b1aeef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqzwt\" (UniqueName: 
\"kubernetes.io/projected/361acab0-1cd6-48fc-b6ef-c77dc3092f98-kube-api-access-qqzwt\") pod \"dns-operator-744455d44c-766fs\" (UID: \"361acab0-1cd6-48fc-b6ef-c77dc3092f98\") " pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249463 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-oauth-serving-cert\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwqz\" (UniqueName: \"kubernetes.io/projected/3c4dc6e5-57fe-454f-89e6-7c37768004b4-kube-api-access-2wwqz\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249510 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-config\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e75fec-8174-41c0-82b1-a01786d46246-audit-dir\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249558 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0ab7979f-0ea9-471f-a71a-75f869d58f14-tmpfs\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f23bd041-73a3-4443-869f-b7d6221d3763-config\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-serving-cert\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/96d01e99-95dd-4969-ab53-f94c7383886f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpld\" (UniqueName: \"kubernetes.io/projected/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-kube-api-access-xfpld\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gmcc\" (UniqueName: \"kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249697 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-service-ca\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-trusted-ca-bundle\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249727 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpjx\" (UniqueName: \"kubernetes.io/projected/b9cd00db-0b78-4c09-8063-2c2bd201fe57-kube-api-access-bzpjx\") pod \"downloads-7954f5f757-tw7nf\" (UID: \"b9cd00db-0b78-4c09-8063-2c2bd201fe57\") " pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249742 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-serving-cert\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249760 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb74z\" (UniqueName: \"kubernetes.io/projected/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-kube-api-access-qb74z\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-config\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b214f93-e9ab-4500-9c6b-6319c5570459-images\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bmlm\" (UniqueName: \"kubernetes.io/projected/60021a99-658a-4bde-81c3-dae4f4870628-kube-api-access-6bmlm\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60021a99-658a-4bde-81c3-dae4f4870628-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249944 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bqscq\" (UniqueName: \"kubernetes.io/projected/f23bd041-73a3-4443-869f-b7d6221d3763-kube-api-access-bqscq\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-config\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48194ca4-ce47-4309-a085-339c9b14f42b-config\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-client-ca\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249332 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250052 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c4dc6e5-57fe-454f-89e6-7c37768004b4-proxy-tls\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250095 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a423478-8008-4169-a257-ee5b0701c460-serving-cert\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250367 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b214f93-e9ab-4500-9c6b-6319c5570459-images\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250504 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21e75fec-8174-41c0-82b1-a01786d46246-audit-dir\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.250648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b214f93-e9ab-4500-9c6b-6319c5570459-config\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.249436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-dir\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.251099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.251176 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f23bd041-73a3-4443-869f-b7d6221d3763-config\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.251375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.251601 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d2rth"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.251725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b214f93-e9ab-4500-9c6b-6319c5570459-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: 
\"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.251993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-service-ca-bundle\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.253099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/96d01e99-95dd-4969-ab53-f94c7383886f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.253984 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.254376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-serving-cert\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.255215 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.256589 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f23bd041-73a3-4443-869f-b7d6221d3763-serving-cert\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.256826 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.257569 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-62qg2"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.259481 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-766fs"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.270330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-serving-cert\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.270565 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96d01e99-95dd-4969-ab53-f94c7383886f-serving-cert\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.273005 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.274051 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.282071 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.285820 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.287865 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.289544 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.290454 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcvsv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.291950 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-stxqv"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.297267 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.298370 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.299423 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9chqb"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.300455 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fw5rj"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.301893 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2thm"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.302466 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.302729 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.303878 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pqqsl"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.304733 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.305713 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.307795 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n68j8"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.308485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.309793 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.311060 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fw5rj"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.312057 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.313057 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wr2lk"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.314264 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.315317 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zknlt"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.316073 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.316729 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-t8dcw"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.318047 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.318920 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zknlt"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.320088 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t8dcw"] Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.328338 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.349353 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.350937 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0ab7979f-0ea9-471f-a71a-75f869d58f14-tmpfs\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpjx\" (UniqueName: \"kubernetes.io/projected/b9cd00db-0b78-4c09-8063-2c2bd201fe57-kube-api-access-bzpjx\") pod \"downloads-7954f5f757-tw7nf\" (UID: \"b9cd00db-0b78-4c09-8063-2c2bd201fe57\") " pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351056 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-serving-cert\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-service-ca\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-trusted-ca-bundle\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb74z\" (UniqueName: \"kubernetes.io/projected/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-kube-api-access-qb74z\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bmlm\" (UniqueName: \"kubernetes.io/projected/60021a99-658a-4bde-81c3-dae4f4870628-kube-api-access-6bmlm\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: 
\"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351267 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60021a99-658a-4bde-81c3-dae4f4870628-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-config\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48194ca4-ce47-4309-a085-339c9b14f42b-config\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351545 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c4dc6e5-57fe-454f-89e6-7c37768004b4-proxy-tls\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a423478-8008-4169-a257-ee5b0701c460-serving-cert\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351646 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a423478-8008-4169-a257-ee5b0701c460-config\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0ab7979f-0ea9-471f-a71a-75f869d58f14-tmpfs\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc 
kubenswrapper[4832]: I1204 06:11:19.351703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48194ca4-ce47-4309-a085-339c9b14f42b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351745 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ab7979f-0ea9-471f-a71a-75f869d58f14-webhook-cert\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.351851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j647d\" (UniqueName: \"kubernetes.io/projected/ce327d25-76b0-4c8b-a163-05f4e0976c34-kube-api-access-j647d\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352197 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-service-ca\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60021a99-658a-4bde-81c3-dae4f4870628-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-trusted-ca-bundle\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/361acab0-1cd6-48fc-b6ef-c77dc3092f98-metrics-tls\") pod \"dns-operator-744455d44c-766fs\" (UID: \"361acab0-1cd6-48fc-b6ef-c77dc3092f98\") " pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c4dc6e5-57fe-454f-89e6-7c37768004b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsw24\" (UniqueName: \"kubernetes.io/projected/a9d3c038-c11e-4925-8802-5c8b57b1aeef-kube-api-access-gsw24\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.352968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-client\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353010 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4de5b92-e3b3-480f-9241-23e3603eaff2-srv-cert\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353035 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4de5b92-e3b3-480f-9241-23e3603eaff2-profile-collector-cert\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-ca\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce327d25-76b0-4c8b-a163-05f4e0976c34-serving-cert\") pod \"etcd-operator-b45778765-d2rth\" (UID: 
\"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67gpf\" (UniqueName: \"kubernetes.io/projected/4a423478-8008-4169-a257-ee5b0701c460-kube-api-access-67gpf\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3ac11867-dffb-4aa1-88ba-d607d5d6f97a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-stxqv\" (UID: \"3ac11867-dffb-4aa1-88ba-d607d5d6f97a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353488 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d3c038-c11e-4925-8802-5c8b57b1aeef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.353713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c4dc6e5-57fe-454f-89e6-7c37768004b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.354311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-serving-cert\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.354481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-ca\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.354750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355299 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/361acab0-1cd6-48fc-b6ef-c77dc3092f98-metrics-tls\") pod \"dns-operator-744455d44c-766fs\" (UID: \"361acab0-1cd6-48fc-b6ef-c77dc3092f98\") " pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355461 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k7fg2\" (UniqueName: \"kubernetes.io/projected/d4de5b92-e3b3-480f-9241-23e3603eaff2-kube-api-access-k7fg2\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-proxy-tls\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hdrv\" (UniqueName: \"kubernetes.io/projected/35c827cd-26e4-4f7a-ba65-bb717839a8d4-kube-api-access-7hdrv\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8xx\" (UniqueName: \"kubernetes.io/projected/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-kube-api-access-sr8xx\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmt8\" (UniqueName: \"kubernetes.io/projected/50fb7e5f-0fc6-47d2-a953-8fece3489792-kube-api-access-qrmt8\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355719 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jnlc\" (UniqueName: \"kubernetes.io/projected/e57a2b10-8b23-4085-a031-3263b4265ccc-kube-api-access-2jnlc\") pod \"cluster-samples-operator-665b6dd947-k95wg\" (UID: \"e57a2b10-8b23-4085-a031-3263b4265ccc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355742 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48194ca4-ce47-4309-a085-339c9b14f42b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355797 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/60021a99-658a-4bde-81c3-dae4f4870628-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7zfl\" (UniqueName: \"kubernetes.io/projected/3ac11867-dffb-4aa1-88ba-d607d5d6f97a-kube-api-access-z7zfl\") pod \"multus-admission-controller-857f4d67dd-stxqv\" (UID: \"3ac11867-dffb-4aa1-88ba-d607d5d6f97a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c4dc6e5-57fe-454f-89e6-7c37768004b4-images\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr9fp\" (UniqueName: \"kubernetes.io/projected/0ab7979f-0ea9-471f-a71a-75f869d58f14-kube-api-access-wr9fp\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskcm\" (UniqueName: \"kubernetes.io/projected/d35e6baa-6315-48ee-904c-05da7d436283-kube-api-access-kskcm\") pod \"control-plane-machine-set-operator-78cbb6b69f-zzf4r\" (UID: \"d35e6baa-6315-48ee-904c-05da7d436283\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.355901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-client\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d35e6baa-6315-48ee-904c-05da7d436283-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zzf4r\" (UID: \"d35e6baa-6315-48ee-904c-05da7d436283\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356037 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c827cd-26e4-4f7a-ba65-bb717839a8d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-oauth-config\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-config\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60021a99-658a-4bde-81c3-dae4f4870628-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356156 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e57a2b10-8b23-4085-a031-3263b4265ccc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-k95wg\" (UID: \"e57a2b10-8b23-4085-a031-3263b4265ccc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ab7979f-0ea9-471f-a71a-75f869d58f14-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c827cd-26e4-4f7a-ba65-bb717839a8d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356287 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-service-ca\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d3c038-c11e-4925-8802-5c8b57b1aeef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqzwt\" (UniqueName: \"kubernetes.io/projected/361acab0-1cd6-48fc-b6ef-c77dc3092f98-kube-api-access-qqzwt\") pod \"dns-operator-744455d44c-766fs\" (UID: \"361acab0-1cd6-48fc-b6ef-c77dc3092f98\") " pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-oauth-serving-cert\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wwqz\" (UniqueName: \"kubernetes.io/projected/3c4dc6e5-57fe-454f-89e6-7c37768004b4-kube-api-access-2wwqz\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-config\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.356928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-config\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.357057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-config\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.357603 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-oauth-serving-cert\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.358174 
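Note on the block above: the reconciler_common.go:218 entries are the kubelet's volume manager starting one MountVolume operation per declared pod volume, and the operation_generator.go:637 entries are the matching completions; pairing the two by UniqueName is a quick way to find volumes that never finished mounting. The helper below is a hypothetical standalone sketch (not kubelet code; the file name pairmounts.go and its logic are invented for illustration) that does this pairing over a saved journal excerpt on stdin.

// pairmounts.go - hypothetical helper that pairs "MountVolume started"
// entries with their "MountVolume.SetUp succeeded" counterparts in a
// saved kubelet journal excerpt, then reports the leftovers.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// Quotes inside the klog message body are escaped as \" in the journal,
	// so the pattern tolerates an optional backslash before the quote.
	started   = regexp.MustCompile(`operationExecutor\.MountVolume started for volume .*?\(UniqueName: \\?"([^"\\]+)`)
	succeeded = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume .*?\(UniqueName: \\?"([^"\\]+)`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true // record the volume's UniqueName as in-flight
		} else if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[1]) // completion observed; clear it
		}
	}
	for v := range pending {
		fmt.Println("started but no SetUp success seen:", v)
	}
}

Usage, assuming the excerpt was saved to kubelet.log: go run pairmounts.go < kubelet.log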
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.358174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/60021a99-658a-4bde-81c3-dae4f4870628-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.359492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e57a2b10-8b23-4085-a031-3263b4265ccc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-k95wg\" (UID: \"e57a2b10-8b23-4085-a031-3263b4265ccc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.359738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-oauth-config\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.361268 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce327d25-76b0-4c8b-a163-05f4e0976c34-serving-cert\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.368703 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.377565 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce327d25-76b0-4c8b-a163-05f4e0976c34-etcd-service-ca\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.388819 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.408692 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.420267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.428739 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.448553 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.469175 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.473006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-config\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.488485 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.509480 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.533855 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.545792 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48194ca4-ce47-4309-a085-339c9b14f42b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.549129 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.569323 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.572703 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48194ca4-ce47-4309-a085-339c9b14f42b-config\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.590329 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.609492 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.628960 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.648550 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.654745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9d3c038-c11e-4925-8802-5c8b57b1aeef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.669418 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.688866 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.700522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9d3c038-c11e-4925-8802-5c8b57b1aeef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.708497 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.729084 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.769759 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.780138 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d35e6baa-6315-48ee-904c-05da7d436283-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zzf4r\" (UID: \"d35e6baa-6315-48ee-904c-05da7d436283\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.788974 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.797945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3ac11867-dffb-4aa1-88ba-d607d5d6f97a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-stxqv\" (UID: \"3ac11867-dffb-4aa1-88ba-d607d5d6f97a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.808711 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.829311 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.849097 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.869833 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.876236 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ab7979f-0ea9-471f-a71a-75f869d58f14-webhook-cert\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.880038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ab7979f-0ea9-471f-a71a-75f869d58f14-apiservice-cert\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.888696 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.909198 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.929004 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.936939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3c4dc6e5-57fe-454f-89e6-7c37768004b4-images\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.949671 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.956454 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d4de5b92-e3b3-480f-9241-23e3603eaff2-profile-collector-cert\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.969052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.976435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d4de5b92-e3b3-480f-9241-23e3603eaff2-srv-cert\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh"
Dec 04 06:11:19 crc kubenswrapper[4832]: I1204 06:11:19.988618 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.009348 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.016574 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c4dc6e5-57fe-454f-89e6-7c37768004b4-proxy-tls\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.029521 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.049812 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.069792 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.075097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a423478-8008-4169-a257-ee5b0701c460-serving-cert\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.088780 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.092931 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a423478-8008-4169-a257-ee5b0701c460-config\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.109790 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.129152 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.140029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-proxy-tls\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.150529 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.169037 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.190942 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
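Note on the reflector.go:368 entries interleaved above: each "Caches populated" line is a client-go reflector completing its initial list/watch for one namespaced object (the kubelet watches the secrets and configmaps its pods reference), and MountVolume.SetUp for a secret or configmap volume can only succeed once the matching cache has synced. Below is a minimal sketch of that client-go primitive, assuming an in-cluster config and a shared informer; the kubelet's actual wiring uses its own per-namespace watch-based managers, so this is illustrative only.

// cachesync.go - minimal sketch of the client-go pattern behind the
// "Caches populated for *v1.ConfigMap" lines: start an informer and
// block until its local cache has synced. Assumes in-cluster config.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// One shared informer per resource type; resync period 0 means no
	// periodic relist, only the initial LIST plus the ongoing WATCH.
	factory := informers.NewSharedInformerFactory(clientset, 0)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		// Same wording that surfaces as an error in the entries further down.
		fmt.Println("failed to sync configmap cache: timed out waiting for the condition")
		return
	}
	fmt.Println("caches populated for *v1.ConfigMap")
}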
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.209439 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.221121 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35c827cd-26e4-4f7a-ba65-bb717839a8d4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.228974 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.237306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35c827cd-26e4-4f7a-ba65-bb717839a8d4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz"
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245222 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245300 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745276191 +0000 UTC m=+136.358093907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245466 4832 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245515 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245536 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert podName:3e212703-f85d-4128-bbff-a3057263d6d3 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745517097 +0000 UTC m=+136.358334813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert") pod "controller-manager-879f6c89f-cpzbl" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245599 4832 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245610 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745579248 +0000 UTC m=+136.358397004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245612 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245644 4832 configmap.go:193] Couldn't get configMap openshift-authentication/audit: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245660 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245647 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245651 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-config podName:a68efa85-b0ea-4db2-9b73-0dc87b2c8328 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745633689 +0000 UTC m=+136.358451405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-config") pod "authentication-operator-69f744f599-pqfp2" (UID: "a68efa85-b0ea-4db2-9b73-0dc87b2c8328") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245733 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745718823 +0000 UTC m=+136.358536569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245749 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245754 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745742963 +0000 UTC m=+136.358560709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245769 4832 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245777 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745766894 +0000 UTC m=+136.358584640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245785 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245801 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745790354 +0000 UTC m=+136.358608100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245809 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245822 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745811195 +0000 UTC m=+136.358628941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245837 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245848 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745832275 +0000 UTC m=+136.358649991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245866 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745858756 +0000 UTC m=+136.358676472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245878 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745872216 +0000 UTC m=+136.358689932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.245891 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.745886127 +0000 UTC m=+136.358703843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.247927 4832 request.go:700] Waited for 1.020384978s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248035 4832 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248105 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.7480842 +0000 UTC m=+136.360901946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248191 4832 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: failed to sync configmap cache: timed out waiting for the condition
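Note on the request.go:700 entry above: the one-second wait was imposed by client-go's client-side token-bucket rate limiter, not by API Priority and Fairness on the server, which is what the message itself says. During a startup burst like this one, many LIST calls (one per watched secret/configmap) queue behind that limiter. The sketch below shows the knobs involved on any client-go consumer; the QPS/Burst values are illustrative, not kubelet defaults, and loading from the default kubeconfig path is an assumption.

// throttle.go - sketch of the client-go settings behind the
// "Waited for ... due to client-side throttling" message.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Load whatever kubeconfig sits at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// QPS/Burst feed a token bucket; each request blocks in request.go
	// until a token is free, which is the wait the journal line reports.
	cfg.QPS = 50    // illustrative value, not a kubelet default
	cfg.Burst = 100 // illustrative value, not a kubelet default
	// Equivalent explicit form of the same limiter:
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
	fmt.Printf("client-side limit: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}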
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248226 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.748215563 +0000 UTC m=+136.361033279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248246 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248268 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.748261414 +0000 UTC m=+136.361079130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248282 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.248307 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.748298745 +0000 UTC m=+136.361116461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249562 4832 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249634 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.749616768 +0000 UTC m=+136.362434514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249633 4832 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249653 4832 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249683 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.749671749 +0000 UTC m=+136.362489495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249692 4832 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249708 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.74969555 +0000 UTC m=+136.362513306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249731 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.74972133 +0000 UTC m=+136.362539076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249808 4832 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249862 4832 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249886 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca podName:3e212703-f85d-4128-bbff-a3057263d6d3 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.749863484 +0000 UTC m=+136.362681240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca") pod "controller-manager-879f6c89f-cpzbl" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.249919 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.749903325 +0000 UTC m=+136.362721081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.250066 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250071 4832 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250123 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.750108581 +0000 UTC m=+136.362926327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250243 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250275 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.750263984 +0000 UTC m=+136.363081700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250298 4832 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250328 4832 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250346 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.750331476 +0000 UTC m=+136.363149192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250376 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config podName:5b2ac879-133d-44de-8d0e-df502cc87c55 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.750360707 +0000 UTC m=+136.363178453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config") pod "machine-approver-56656f9798-8lgmq" (UID: "5b2ac879-133d-44de-8d0e-df502cc87c55") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.250967 4832 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.251016 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls podName:5b2ac879-133d-44de-8d0e-df502cc87c55 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.751005132 +0000 UTC m=+136.363822848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls") pod "machine-approver-56656f9798-8lgmq" (UID: "5b2ac879-133d-44de-8d0e-df502cc87c55") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.252082 4832 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.252128 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.75211773 +0000 UTC m=+136.364935446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync configmap cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.252146 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: E1204 06:11:20.252172 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:20.752164651 +0000 UTC m=+136.364982367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.289154 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.308699 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.328753 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.348907 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.368662 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.388870 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.409232 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.428903 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.449938 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.468913 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.489366 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.509086 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.535269 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.549150 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.569079 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.589157 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.609414 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
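Note on the nestedpendingoperations.go:348 entries above: a failed MountVolume.SetUp is not retried immediately; the operation is parked and retried after a delay that starts at 500ms (the durationBeforeRetry in each message) and grows exponentially while the failure persists. The sketch below reproduces that shape with the generic apimachinery backoff helper; trySetUp and its hard-coded failure count are invented stand-ins, not kubelet code.

// remount.go - sketch of the retry shape behind "No retries permitted
// until ... (durationBeforeRetry 500ms)": an exponentially growing
// delay between MountVolume.SetUp attempts.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errVolumeNotReady = errors.New("failed to sync secret cache: timed out waiting for the condition")

// trySetUp stands in for the mount operation; here it fails twice, then succeeds.
func trySetUp(attempt int) error {
	if attempt < 3 {
		return errVolumeNotReady
	}
	return nil
}

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial delay, matching durationBeforeRetry 500ms
		Factor:   2.0,                    // each further failure doubles the delay
		Steps:    5,                      // give up after five attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if err := trySetUp(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v (retrying)\n", attempt, err)
			return false, nil // not done; retry after the next interval
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Println("MountVolume.SetUp succeeded after", attempt, "attempts")
}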
Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.629653 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.649570 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.669217 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.689858 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.709011 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.729462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.748849 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.769693 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780845 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.780956 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781224 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781491 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781640 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781756 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-config\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.781942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782096 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782160 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc 
kubenswrapper[4832]: I1204 06:11:20.782205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782268 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782710 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.782784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.789732 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 04 06:11:20 crc 
kubenswrapper[4832]: I1204 06:11:20.809241 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.838215 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.849338 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.870128 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.890697 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.909531 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.942978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hmhf\" (UniqueName: \"kubernetes.io/projected/a07eda47-4b27-4396-90a1-a6a1569a6f99-kube-api-access-2hmhf\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:20 crc kubenswrapper[4832]: I1204 06:11:20.970591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkk2\" (UniqueName: \"kubernetes.io/projected/96d01e99-95dd-4969-ab53-f94c7383886f-kube-api-access-fmkk2\") pod \"openshift-config-operator-7777fb866f-r6f6g\" (UID: \"96d01e99-95dd-4969-ab53-f94c7383886f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.023700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294pt\" (UniqueName: \"kubernetes.io/projected/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-kube-api-access-294pt\") pod \"route-controller-manager-6576b87f9c-lhfgj\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.043251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqscq\" (UniqueName: \"kubernetes.io/projected/f23bd041-73a3-4443-869f-b7d6221d3763-kube-api-access-bqscq\") pod \"console-operator-58897d9998-n68j8\" (UID: \"f23bd041-73a3-4443-869f-b7d6221d3763\") " pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.082303 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59nx\" (UniqueName: \"kubernetes.io/projected/8b214f93-e9ab-4500-9c6b-6319c5570459-kube-api-access-c59nx\") pod \"machine-api-operator-5694c8668f-gcvsv\" (UID: \"8b214f93-e9ab-4500-9c6b-6319c5570459\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.090736 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.099115 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.129169 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.131811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpld\" (UniqueName: \"kubernetes.io/projected/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-kube-api-access-xfpld\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.150418 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.169649 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.189257 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.209563 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.228923 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.248128 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.249573 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.252193 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n68j8"] Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.267462 4832 request.go:700] Waited for 1.949188838s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.268863 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.290251 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.290677 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g"] Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.309663 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 04 06:11:21 crc kubenswrapper[4832]: W1204 06:11:21.313178 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96d01e99_95dd_4969_ab53_f94c7383886f.slice/crio-441c840998c763a15703620f0e0783600fa58d520bb0f7f322cc8b222f6ac382 WatchSource:0}: Error finding container 441c840998c763a15703620f0e0783600fa58d520bb0f7f322cc8b222f6ac382: Status 404 returned error can't find the container with id 441c840998c763a15703620f0e0783600fa58d520bb0f7f322cc8b222f6ac382 Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.322796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-n68j8" event={"ID":"f23bd041-73a3-4443-869f-b7d6221d3763","Type":"ContainerStarted","Data":"4ece8d81238a9cadb49d3135b39f91c8ce82263772f933a873dc7a6425e73bf1"} Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.347258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bmlm\" (UniqueName: \"kubernetes.io/projected/60021a99-658a-4bde-81c3-dae4f4870628-kube-api-access-6bmlm\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.370230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb74z\" (UniqueName: \"kubernetes.io/projected/4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9-kube-api-access-qb74z\") pod \"openshift-apiserver-operator-796bbdcf4f-5zmrn\" (UID: \"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.381314 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.383624 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j647d\" (UniqueName: \"kubernetes.io/projected/ce327d25-76b0-4c8b-a163-05f4e0976c34-kube-api-access-j647d\") pod \"etcd-operator-b45778765-d2rth\" (UID: \"ce327d25-76b0-4c8b-a163-05f4e0976c34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.402209 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj"] Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.403769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpjx\" (UniqueName: \"kubernetes.io/projected/b9cd00db-0b78-4c09-8063-2c2bd201fe57-kube-api-access-bzpjx\") pod \"downloads-7954f5f757-tw7nf\" (UID: \"b9cd00db-0b78-4c09-8063-2c2bd201fe57\") " pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.407421 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.413720 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.421531 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c52abcc0-4f0f-4094-9cbb-3bbad9978f53-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xzjgl\" (UID: \"c52abcc0-4f0f-4094-9cbb-3bbad9978f53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.441949 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsw24\" (UniqueName: \"kubernetes.io/projected/a9d3c038-c11e-4925-8802-5c8b57b1aeef-kube-api-access-gsw24\") pod \"openshift-controller-manager-operator-756b6f6bc6-hvs6v\" (UID: \"a9d3c038-c11e-4925-8802-5c8b57b1aeef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.462277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67gpf\" (UniqueName: \"kubernetes.io/projected/4a423478-8008-4169-a257-ee5b0701c460-kube-api-access-67gpf\") pod \"service-ca-operator-777779d784-62qg2\" (UID: \"4a423478-8008-4169-a257-ee5b0701c460\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.475364 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.481234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7fg2\" (UniqueName: \"kubernetes.io/projected/d4de5b92-e3b3-480f-9241-23e3603eaff2-kube-api-access-k7fg2\") pod \"catalog-operator-68c6474976-x48rh\" (UID: \"d4de5b92-e3b3-480f-9241-23e3603eaff2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.481898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.502911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdrv\" (UniqueName: \"kubernetes.io/projected/35c827cd-26e4-4f7a-ba65-bb717839a8d4-kube-api-access-7hdrv\") pod \"kube-storage-version-migrator-operator-b67b599dd-c7tjz\" (UID: \"35c827cd-26e4-4f7a-ba65-bb717839a8d4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.510224 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.523536 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8xx\" (UniqueName: \"kubernetes.io/projected/ecf4e5af-74f0-44c8-9231-0719fa4d0f16-kube-api-access-sr8xx\") pod \"machine-config-controller-84d6567774-lwnml\" (UID: \"ecf4e5af-74f0-44c8-9231-0719fa4d0f16\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.548487 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmt8\" (UniqueName: \"kubernetes.io/projected/50fb7e5f-0fc6-47d2-a953-8fece3489792-kube-api-access-qrmt8\") pod \"console-f9d7485db-g2thm\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.552751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.559701 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.566165 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.572807 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.588834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48194ca4-ce47-4309-a085-339c9b14f42b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nbpnv\" (UID: \"48194ca4-ce47-4309-a085-339c9b14f42b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:21 crc kubenswrapper[4832]: W1204 06:11:21.600178 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd854fc2c_f4d8_4a3f_a9c9_06cc2ace9905.slice/crio-2f7fed5bb81ad6e0660df2f10b9b22a8a6f542b07f24ae6222afee0d5e6aed91 WatchSource:0}: Error finding container 2f7fed5bb81ad6e0660df2f10b9b22a8a6f542b07f24ae6222afee0d5e6aed91: Status 404 returned error can't find the container with id 2f7fed5bb81ad6e0660df2f10b9b22a8a6f542b07f24ae6222afee0d5e6aed91 Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.603344 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7zfl\" (UniqueName: \"kubernetes.io/projected/3ac11867-dffb-4aa1-88ba-d607d5d6f97a-kube-api-access-z7zfl\") pod \"multus-admission-controller-857f4d67dd-stxqv\" (UID: \"3ac11867-dffb-4aa1-88ba-d607d5d6f97a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.603924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jnlc\" (UniqueName: \"kubernetes.io/projected/e57a2b10-8b23-4085-a031-3263b4265ccc-kube-api-access-2jnlc\") pod \"cluster-samples-operator-665b6dd947-k95wg\" (UID: \"e57a2b10-8b23-4085-a031-3263b4265ccc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.623870 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr9fp\" (UniqueName: \"kubernetes.io/projected/0ab7979f-0ea9-471f-a71a-75f869d58f14-kube-api-access-wr9fp\") pod \"packageserver-d55dfcdfc-nmzfk\" (UID: \"0ab7979f-0ea9-471f-a71a-75f869d58f14\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.653087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskcm\" (UniqueName: \"kubernetes.io/projected/d35e6baa-6315-48ee-904c-05da7d436283-kube-api-access-kskcm\") pod \"control-plane-machine-set-operator-78cbb6b69f-zzf4r\" (UID: \"d35e6baa-6315-48ee-904c-05da7d436283\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.668375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60021a99-658a-4bde-81c3-dae4f4870628-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-49ckx\" (UID: \"60021a99-658a-4bde-81c3-dae4f4870628\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.689083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wwqz\" (UniqueName: 
\"kubernetes.io/projected/3c4dc6e5-57fe-454f-89e6-7c37768004b4-kube-api-access-2wwqz\") pod \"machine-config-operator-74547568cd-sqv98\" (UID: \"3c4dc6e5-57fe-454f-89e6-7c37768004b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.706030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqzwt\" (UniqueName: \"kubernetes.io/projected/361acab0-1cd6-48fc-b6ef-c77dc3092f98-kube-api-access-qqzwt\") pod \"dns-operator-744455d44c-766fs\" (UID: \"361acab0-1cd6-48fc-b6ef-c77dc3092f98\") " pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.720510 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.728308 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.730567 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.732976 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a68efa85-b0ea-4db2-9b73-0dc87b2c8328-config\") pod \"authentication-operator-69f744f599-pqfp2\" (UID: \"a68efa85-b0ea-4db2-9b73-0dc87b2c8328\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.742844 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-766fs" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.750063 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.750338 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.753662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783236 4832 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783255 4832 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783274 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783307 4832 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783315 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783294676 +0000 UTC m=+138.396112382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783333 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783325627 +0000 UTC m=+138.396143333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783337 4832 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783345 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config podName:5b2ac879-133d-44de-8d0e-df502cc87c55 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783339067 +0000 UTC m=+138.396156773 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config") pod "machine-approver-56656f9798-8lgmq" (UID: "5b2ac879-133d-44de-8d0e-df502cc87c55") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783362 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783368 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls podName:5b2ac879-133d-44de-8d0e-df502cc87c55 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783356108 +0000 UTC m=+138.396173814 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls") pod "machine-approver-56656f9798-8lgmq" (UID: "5b2ac879-133d-44de-8d0e-df502cc87c55") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783383 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783410 4832 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783384 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783378288 +0000 UTC m=+138.396195994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783434 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783425809 +0000 UTC m=+138.396243515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783438 4832 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783447 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.78344046 +0000 UTC m=+138.396258166 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783461 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783462 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert podName:3e212703-f85d-4128-bbff-a3057263d6d3 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.78345385 +0000 UTC m=+138.396271556 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert") pod "controller-manager-879f6c89f-cpzbl" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783481 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783491 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783484091 +0000 UTC m=+138.396301807 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783502 4832 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783513 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-session: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783544 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783558 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783502 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783496921 +0000 UTC m=+138.396314627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783243 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783579 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783571783 +0000 UTC m=+138.396389479 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783590 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783584323 +0000 UTC m=+138.396402039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783602 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783595863 +0000 UTC m=+138.396413569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783614 4832 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783616 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783608114 +0000 UTC m=+138.396425820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783659 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783642454 +0000 UTC m=+138.396460160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783664 4832 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783674 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca podName:3e212703-f85d-4128-bbff-a3057263d6d3 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783667685 +0000 UTC m=+138.396485391 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca") pod "controller-manager-879f6c89f-cpzbl" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783687 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783680455 +0000 UTC m=+138.396498161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783693 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783707 4832 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783718 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783711886 +0000 UTC m=+138.396529592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783733 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783724266 +0000 UTC m=+138.396541982 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783747 4832 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783753 4832 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783770 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783763427 +0000 UTC m=+138.396581133 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783784 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies podName:a07eda47-4b27-4396-90a1-a6a1569a6f99 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783774778 +0000 UTC m=+138.396592484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies") pod "apiserver-7bbb656c7d-9s7hj" (UID: "a07eda47-4b27-4396-90a1-a6a1569a6f99") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783830 4832 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783883 4832 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783914 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783905571 +0000 UTC m=+138.396723277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783935 4832 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783960 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.783952732 +0000 UTC m=+138.396770438 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.783990 4832 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784015 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.784007223 +0000 UTC m=+138.396824929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784041 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784064 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.784057344 +0000 UTC m=+138.396875050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784091 4832 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784114 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle podName:21e75fec-8174-41c0-82b1-a01786d46246 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.784106946 +0000 UTC m=+138.396924652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle") pod "apiserver-76f77b778f-wtnbm" (UID: "21e75fec-8174-41c0-82b1-a01786d46246") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784148 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.784138166 +0000 UTC m=+138.396955862 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.784415 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.784876 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.784813953 +0000 UTC m=+138.397631659 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync secret cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.789889 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.791248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.811042 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.817536 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.823433 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gcvsv"] Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.831305 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.838886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.841200 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.846125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.851996 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.870297 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.890467 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.911385 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn"] Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.919519 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.930243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.934217 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4rlf\" (UniqueName: \"kubernetes.io/projected/21e75fec-8174-41c0-82b1-a01786d46246-kube-api-access-r4rlf\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.949313 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 04 06:11:21 crc kubenswrapper[4832]: W1204 06:11:21.950001 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b214f93_e9ab_4500_9c6b_6319c5570459.slice/crio-5157ce04d3bbfc0ce24d4616ebb86511fa99d42ce33071ec64f019eee64b9b85 WatchSource:0}: Error finding container 5157ce04d3bbfc0ce24d4616ebb86511fa99d42ce33071ec64f019eee64b9b85: Status 404 returned error can't find the container with id 5157ce04d3bbfc0ce24d4616ebb86511fa99d42ce33071ec64f019eee64b9b85 Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.974168 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.981495 4832 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.981540 4832 projected.go:194] Error preparing data for projected volume kube-api-access-qhz4r for pod openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: E1204 06:11:21.981625 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b2ac879-133d-44de-8d0e-df502cc87c55-kube-api-access-qhz4r podName:5b2ac879-133d-44de-8d0e-df502cc87c55 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.481604002 +0000 UTC m=+138.094421718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz4r" (UniqueName: "kubernetes.io/projected/5b2ac879-133d-44de-8d0e-df502cc87c55-kube-api-access-qhz4r") pod "machine-approver-56656f9798-8lgmq" (UID: "5b2ac879-133d-44de-8d0e-df502cc87c55") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:21 crc kubenswrapper[4832]: I1204 06:11:21.993119 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.003089 4832 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.003120 4832 projected.go:194] Error preparing data for projected volume kube-api-access-r7kwt for pod openshift-controller-manager/controller-manager-879f6c89f-cpzbl: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.003188 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt podName:3e212703-f85d-4128-bbff-a3057263d6d3 nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.503168735 +0000 UTC m=+138.115986441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r7kwt" (UniqueName: "kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt") pod "controller-manager-879f6c89f-cpzbl" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.009422 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.029454 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.054996 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.073533 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.099989 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.100223 4832 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.100260 4832 projected.go:194] Error preparing data for projected volume kube-api-access-5gmcc for pod openshift-authentication/oauth-openshift-558db77b4-vcj7x: failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.100314 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc podName:d1bc185a-fac5-4103-947a-d3d660802249 nodeName:}" failed. 
No retries permitted until 2025-12-04 06:11:22.600296708 +0000 UTC m=+138.213114404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5gmcc" (UniqueName: "kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc") pod "oauth-openshift-558db77b4-vcj7x" (UID: "d1bc185a-fac5-4103-947a-d3d660802249") : failed to sync configmap cache: timed out waiting for the condition Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.140602 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.150246 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.169707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.189299 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9f05718-aaf5-41f3-94b2-026b8eb39474-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-certificates\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-trusted-ca\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.208416 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.708403492 +0000 UTC m=+138.321221198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208503 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9f05718-aaf5-41f3-94b2-026b8eb39474-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-tls\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-bound-sa-token\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208862 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9r4g\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-kube-api-access-h9r4g\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.208926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.209038 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.213267 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.230484 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.249475 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.268754 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.287697 4832 request.go:700] Waited for 1.886305487s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0 Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.289263 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.308515 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310578 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310756 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-certs\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310871 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-node-bootstrap-token\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0bf57fcb-7a14-4523-90f9-0a62334537cf-metrics-tls\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310953 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-stats-auth\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.310994 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-socket-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-plugins-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-srv-cert\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgbj\" (UniqueName: \"kubernetes.io/projected/3d59fb7a-ef01-4919-8060-615a77afd343-kube-api-access-4vgbj\") pod \"package-server-manager-789f6589d5-nqbbx\" (UID: \"3d59fb7a-ef01-4919-8060-615a77afd343\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-default-certificate\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311194 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdg8\" (UniqueName: \"kubernetes.io/projected/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-kube-api-access-nrdg8\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9f05718-aaf5-41f3-94b2-026b8eb39474-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssl8q\" (UniqueName: \"kubernetes.io/projected/0bf57fcb-7a14-4523-90f9-0a62334537cf-kube-api-access-ssl8q\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311239 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-registration-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee453606-81c1-43d2-9121-c7a830f193cc-cert\") pod \"ingress-canary-zknlt\" (UID: \"ee453606-81c1-43d2-9121-c7a830f193cc\") " pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311295 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-certificates\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311326 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-trusted-ca\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc09cb39-1b31-47c6-88c7-8c15d31c4960-secret-volume\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311462 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c64d9ec8-5ba3-455b-8932-d73b55863bf6-signing-key\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311518 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr7x4\" (UniqueName: \"kubernetes.io/projected/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-kube-api-access-gr7x4\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311532 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-csi-data-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9f05718-aaf5-41f3-94b2-026b8eb39474-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-927r4\" (UniqueName: \"kubernetes.io/projected/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-kube-api-access-927r4\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d59fb7a-ef01-4919-8060-615a77afd343-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nqbbx\" (UID: \"3d59fb7a-ef01-4919-8060-615a77afd343\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjdg\" (UniqueName: \"kubernetes.io/projected/3bdc749a-22f3-4cb8-b987-04f7bc297cde-kube-api-access-wmjdg\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssdft\" (UniqueName: \"kubernetes.io/projected/bc09cb39-1b31-47c6-88c7-8c15d31c4960-kube-api-access-ssdft\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311848 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zj5d\" (UniqueName: \"kubernetes.io/projected/95ed2e36-8e95-4012-be78-7f7e66d0349f-kube-api-access-4zj5d\") pod \"migrator-59844c95c7-9nf4f\" (UID: \"95ed2e36-8e95-4012-be78-7f7e66d0349f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.311960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc09cb39-1b31-47c6-88c7-8c15d31c4960-config-volume\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312053 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-tls\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbrl\" (UniqueName: \"kubernetes.io/projected/c64d9ec8-5ba3-455b-8932-d73b55863bf6-kube-api-access-rqbrl\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf57fcb-7a14-4523-90f9-0a62334537cf-config-volume\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-metrics-certs\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5m7\" (UniqueName: \"kubernetes.io/projected/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-kube-api-access-pn5m7\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-bound-sa-token\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312219 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9r4g\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-kube-api-access-h9r4g\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312305 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2tjg\" (UniqueName: \"kubernetes.io/projected/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-kube-api-access-h2tjg\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312349 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c64d9ec8-5ba3-455b-8932-d73b55863bf6-signing-cabundle\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-mountpoint-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312447 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdc749a-22f3-4cb8-b987-04f7bc297cde-service-ca-bundle\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.312464 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7r45\" (UniqueName: \"kubernetes.io/projected/ee453606-81c1-43d2-9121-c7a830f193cc-kube-api-access-c7r45\") pod \"ingress-canary-zknlt\" (UID: \"ee453606-81c1-43d2-9121-c7a830f193cc\") " pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.312563 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.812549959 +0000 UTC m=+138.425367665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.315595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.318408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9f05718-aaf5-41f3-94b2-026b8eb39474-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.319284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-certificates\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.321970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-trusted-ca\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.322571 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.322703 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9f05718-aaf5-41f3-94b2-026b8eb39474-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.328207 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" event={"ID":"8b214f93-e9ab-4500-9c6b-6319c5570459","Type":"ContainerStarted","Data":"e647170afa3595726056aa0dd56f4b9ed607a2277013d377a89ca9a8082cfe7a"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.328257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" 
event={"ID":"8b214f93-e9ab-4500-9c6b-6319c5570459","Type":"ContainerStarted","Data":"5157ce04d3bbfc0ce24d4616ebb86511fa99d42ce33071ec64f019eee64b9b85"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.328767 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.330987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-n68j8" event={"ID":"f23bd041-73a3-4443-869f-b7d6221d3763","Type":"ContainerStarted","Data":"1583c9169fcd761c231f0b1acf706200c3d0cf2349dd228cf146be4054b2994e"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.331204 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.332446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" event={"ID":"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905","Type":"ContainerStarted","Data":"7df3ce7feaee0ea78864bb905b85180b824ac11f9a7d96ab52c736b96cf5044a"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.332469 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" event={"ID":"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905","Type":"ContainerStarted","Data":"2f7fed5bb81ad6e0660df2f10b9b22a8a6f542b07f24ae6222afee0d5e6aed91"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.332546 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.333994 4832 generic.go:334] "Generic (PLEG): container finished" podID="96d01e99-95dd-4969-ab53-f94c7383886f" containerID="a55f9607d84bb690535f763a00e04c8af18a7fca1f63f424937dd699c85a74a4" exitCode=0 Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.334122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" event={"ID":"96d01e99-95dd-4969-ab53-f94c7383886f","Type":"ContainerDied","Data":"a55f9607d84bb690535f763a00e04c8af18a7fca1f63f424937dd699c85a74a4"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.334147 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" event={"ID":"96d01e99-95dd-4969-ab53-f94c7383886f","Type":"ContainerStarted","Data":"441c840998c763a15703620f0e0783600fa58d520bb0f7f322cc8b222f6ac382"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.351638 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.352061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" event={"ID":"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9","Type":"ContainerStarted","Data":"e4984abd1126db8d49b84084c156bfba224f52a9eded966f51d3f229be1ca9ee"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.352101 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" 
event={"ID":"4f1eacc2-57a4-4722-ad4c-fc99be3f9cd9","Type":"ContainerStarted","Data":"09fe0a5ec6683b720f7db274e641bd7765e0e2fd5f2136ebadb93199ed7e75e8"} Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.352822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-tls\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.364642 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-62qg2"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.379363 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d2rth"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.379886 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.381482 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.395683 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.412844 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2tjg\" (UniqueName: \"kubernetes.io/projected/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-kube-api-access-h2tjg\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c64d9ec8-5ba3-455b-8932-d73b55863bf6-signing-cabundle\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414511 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-mountpoint-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdc749a-22f3-4cb8-b987-04f7bc297cde-service-ca-bundle\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414602 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7r45\" (UniqueName: 
\"kubernetes.io/projected/ee453606-81c1-43d2-9121-c7a830f193cc-kube-api-access-c7r45\") pod \"ingress-canary-zknlt\" (UID: \"ee453606-81c1-43d2-9121-c7a830f193cc\") " pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-certs\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-node-bootstrap-token\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-stats-auth\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0bf57fcb-7a14-4523-90f9-0a62334537cf-metrics-tls\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-socket-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414804 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-plugins-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-srv-cert\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgbj\" (UniqueName: \"kubernetes.io/projected/3d59fb7a-ef01-4919-8060-615a77afd343-kube-api-access-4vgbj\") pod \"package-server-manager-789f6589d5-nqbbx\" (UID: \"3d59fb7a-ef01-4919-8060-615a77afd343\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-default-certificate\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.414974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrdg8\" (UniqueName: \"kubernetes.io/projected/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-kube-api-access-nrdg8\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssl8q\" (UniqueName: \"kubernetes.io/projected/0bf57fcb-7a14-4523-90f9-0a62334537cf-kube-api-access-ssl8q\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415021 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-registration-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415038 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee453606-81c1-43d2-9121-c7a830f193cc-cert\") pod \"ingress-canary-zknlt\" (UID: \"ee453606-81c1-43d2-9121-c7a830f193cc\") " pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc09cb39-1b31-47c6-88c7-8c15d31c4960-secret-volume\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 
06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c64d9ec8-5ba3-455b-8932-d73b55863bf6-signing-key\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415206 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr7x4\" (UniqueName: \"kubernetes.io/projected/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-kube-api-access-gr7x4\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-csi-data-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-927r4\" (UniqueName: \"kubernetes.io/projected/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-kube-api-access-927r4\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d59fb7a-ef01-4919-8060-615a77afd343-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nqbbx\" (UID: \"3d59fb7a-ef01-4919-8060-615a77afd343\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415362 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdft\" (UniqueName: \"kubernetes.io/projected/bc09cb39-1b31-47c6-88c7-8c15d31c4960-kube-api-access-ssdft\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wmjdg\" (UniqueName: \"kubernetes.io/projected/3bdc749a-22f3-4cb8-b987-04f7bc297cde-kube-api-access-wmjdg\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zj5d\" (UniqueName: \"kubernetes.io/projected/95ed2e36-8e95-4012-be78-7f7e66d0349f-kube-api-access-4zj5d\") pod \"migrator-59844c95c7-9nf4f\" (UID: \"95ed2e36-8e95-4012-be78-7f7e66d0349f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415488 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415511 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc09cb39-1b31-47c6-88c7-8c15d31c4960-config-volume\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415547 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbrl\" (UniqueName: \"kubernetes.io/projected/c64d9ec8-5ba3-455b-8932-d73b55863bf6-kube-api-access-rqbrl\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf57fcb-7a14-4523-90f9-0a62334537cf-config-volume\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5m7\" (UniqueName: \"kubernetes.io/projected/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-kube-api-access-pn5m7\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.415642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-metrics-certs\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" 
Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.417091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c64d9ec8-5ba3-455b-8932-d73b55863bf6-signing-cabundle\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.417157 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-mountpoint-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.417548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-csi-data-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.417715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-socket-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.417736 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bdc749a-22f3-4cb8-b987-04f7bc297cde-service-ca-bundle\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.417778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-plugins-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.418346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc09cb39-1b31-47c6-88c7-8c15d31c4960-config-volume\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.418476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-registration-dir\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.421328 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:22.921313369 +0000 UTC m=+138.534131075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.422049 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf57fcb-7a14-4523-90f9-0a62334537cf-config-volume\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.424379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.430837 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.435035 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-certs\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.435067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-stats-auth\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.435103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc09cb39-1b31-47c6-88c7-8c15d31c4960-secret-volume\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.435101 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.435526 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d59fb7a-ef01-4919-8060-615a77afd343-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-nqbbx\" (UID: \"3d59fb7a-ef01-4919-8060-615a77afd343\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.435548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-metrics-certs\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.439508 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c64d9ec8-5ba3-455b-8932-d73b55863bf6-signing-key\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.440643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-srv-cert\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.441174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.442577 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-node-bootstrap-token\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.445562 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.446082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3bdc749a-22f3-4cb8-b987-04f7bc297cde-default-certificate\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.446700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee453606-81c1-43d2-9121-c7a830f193cc-cert\") pod \"ingress-canary-zknlt\" (UID: \"ee453606-81c1-43d2-9121-c7a830f193cc\") " pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.448630 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0bf57fcb-7a14-4523-90f9-0a62334537cf-metrics-tls\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.448859 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.449104 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.453915 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.468825 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.489522 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.510207 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.517797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.528962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz4r\" (UniqueName: \"kubernetes.io/projected/5b2ac879-133d-44de-8d0e-df502cc87c55-kube-api-access-qhz4r\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.529170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7kwt\" (UniqueName: \"kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.552817 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.052791799 +0000 UTC m=+138.665609505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.556563 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tw7nf"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.556604 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.557009 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.557200 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.561070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz4r\" (UniqueName: \"kubernetes.io/projected/5b2ac879-133d-44de-8d0e-df502cc87c55-kube-api-access-qhz4r\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.565155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7kwt\" (UniqueName: \"kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.590517 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.590970 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.613365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-bound-sa-token\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.634674 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.635317 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gmcc\" (UniqueName: 
\"kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.635374 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.135353974 +0000 UTC m=+138.748171760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.635698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5bd8e08-cd81-4930-bbeb-3b7964c55cb5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f77zm\" (UID: \"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.636986 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.648726 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gmcc\" (UniqueName: \"kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.682188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9r4g\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-kube-api-access-h9r4g\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.696624 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.696684 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.696696 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g2thm"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.697901 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.703095 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.731228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2tjg\" (UniqueName: \"kubernetes.io/projected/48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3-kube-api-access-h2tjg\") pod \"olm-operator-6b444d44fb-8ft9t\" (UID: \"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.731897 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-927r4\" (UniqueName: \"kubernetes.io/projected/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-kube-api-access-927r4\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.733894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssl8q\" (UniqueName: \"kubernetes.io/projected/0bf57fcb-7a14-4523-90f9-0a62334537cf-kube-api-access-ssl8q\") pod \"dns-default-t8dcw\" (UID: \"0bf57fcb-7a14-4523-90f9-0a62334537cf\") " pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.735608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7r45\" (UniqueName: \"kubernetes.io/projected/ee453606-81c1-43d2-9121-c7a830f193cc-kube-api-access-c7r45\") pod \"ingress-canary-zknlt\" (UID: \"ee453606-81c1-43d2-9121-c7a830f193cc\") " pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.736025 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.736428 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.236373744 +0000 UTC m=+138.849191450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.761927 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.761995 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr7x4\" (UniqueName: \"kubernetes.io/projected/020db14a-b4ac-432d-8c8a-bd3ae7cac2b4-kube-api-access-gr7x4\") pod \"csi-hostpathplugin-fw5rj\" (UID: \"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4\") " pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.762004 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-766fs"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.775831 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdft\" (UniqueName: \"kubernetes.io/projected/bc09cb39-1b31-47c6-88c7-8c15d31c4960-kube-api-access-ssdft\") pod \"collect-profiles-29413800-89r85\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.786048 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-n68j8" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.799845 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmjdg\" (UniqueName: \"kubernetes.io/projected/3bdc749a-22f3-4cb8-b987-04f7bc297cde-kube-api-access-wmjdg\") pod \"router-default-5444994796-jdgxv\" (UID: \"3bdc749a-22f3-4cb8-b987-04f7bc297cde\") " pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.800442 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.807858 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.810879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zj5d\" (UniqueName: \"kubernetes.io/projected/95ed2e36-8e95-4012-be78-7f7e66d0349f-kube-api-access-4zj5d\") pod \"migrator-59844c95c7-9nf4f\" (UID: \"95ed2e36-8e95-4012-be78-7f7e66d0349f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.830855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fjbm4\" (UID: \"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.836995 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.838983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839090 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839105 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client\") pod 
\"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839126 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839207 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839305 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839348 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839414 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839488 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839507 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839574 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.839598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.840124 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-audit-policies\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.840793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.840846 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.841448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.841836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-audit\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.842905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-image-import-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.843509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.846773 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.346751124 +0000 UTC m=+138.959568830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.845212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.845748 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.846726 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.844816 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.851735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.854025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.856437 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.857251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-etcd-client\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.857427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-serving-cert\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.857808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrdg8\" (UniqueName: \"kubernetes.io/projected/5352eab3-5b2d-436a-9d0a-6627d7f4f3eb-kube-api-access-nrdg8\") pod \"machine-config-server-snlqx\" (UID: \"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb\") " pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.857836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") pod \"controller-manager-879f6c89f-cpzbl\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.857865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.858261 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.858314 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/21e75fec-8174-41c0-82b1-a01786d46246-etcd-serving-ca\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.858670 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-snlqx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.858842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b2ac879-133d-44de-8d0e-df502cc87c55-auth-proxy-config\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.859451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.859469 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-serving-cert\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.859997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-etcd-client\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.861611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a07eda47-4b27-4396-90a1-a6a1569a6f99-encryption-config\") pod \"apiserver-7bbb656c7d-9s7hj\" (UID: \"a07eda47-4b27-4396-90a1-a6a1569a6f99\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.861683 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.863451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/21e75fec-8174-41c0-82b1-a01786d46246-encryption-config\") pod \"apiserver-76f77b778f-wtnbm\" (UID: \"21e75fec-8174-41c0-82b1-a01786d46246\") " pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.864743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b2ac879-133d-44de-8d0e-df502cc87c55-machine-approver-tls\") pod \"machine-approver-56656f9798-8lgmq\" (UID: \"5b2ac879-133d-44de-8d0e-df502cc87c55\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.865099 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-stxqv"] Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.873990 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.877267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vcj7x\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") " pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.878114 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgbj\" (UniqueName: \"kubernetes.io/projected/3d59fb7a-ef01-4919-8060-615a77afd343-kube-api-access-4vgbj\") pod \"package-server-manager-789f6589d5-nqbbx\" (UID: \"3d59fb7a-ef01-4919-8060-615a77afd343\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.887718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.890629 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbrl\" (UniqueName: \"kubernetes.io/projected/c64d9ec8-5ba3-455b-8932-d73b55863bf6-kube-api-access-rqbrl\") pod \"service-ca-9c57cc56f-wr2lk\" (UID: \"c64d9ec8-5ba3-455b-8932-d73b55863bf6\") " pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.887172 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zknlt" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.940234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5m7\" (UniqueName: \"kubernetes.io/projected/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-kube-api-access-pn5m7\") pod \"marketplace-operator-79b997595-pqqsl\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.942796 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:22 crc kubenswrapper[4832]: E1204 06:11:22.943353 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.443327544 +0000 UTC m=+139.056145300 (durationBeforeRetry 500ms). 
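
Note: the TearDown failure above is the first of a long retry series. The kubelet can only hand a CSI call to a driver that has registered over the node's plugin-registration socket, and kubevirt.io.hostpath-provisioner has not registered yet (the csi-hostpathplugin-fw5rj pod that presumably provides it is itself still waiting for a sandbox, per the entry just above). A minimal sketch, in Go, of listing which drivers have registered; it assumes the default kubelet registration directory /var/lib/kubelet/plugins_registry, which is an assumption about this node's layout, not something taken from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Assumed default path; node plugins drop a registration socket here.
	dir := "/var/lib/kubelet/plugins_registry"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		name := e.Name()
		if !strings.HasSuffix(name, ".sock") {
			continue // only registration sockets are of interest
		}
		fmt.Println("registered plugin socket:", filepath.Join(dir, name))
		if strings.Contains(name, "kubevirt.io.hostpath-provisioner") {
			found = true
		}
	}
	if !found {
		fmt.Println("kubevirt.io.hostpath-provisioner has not registered yet")
	}
}

Until a socket for kubevirt.io.hostpath-provisioner shows up there, every TearDown and MountDevice attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 keeps failing exactly as logged below.
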
Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.951488 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98"]
Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.961416 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pqfp2"]
Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.995272 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x"
Dec 04 06:11:22 crc kubenswrapper[4832]: I1204 06:11:22.997973 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-n68j8" podStartSLOduration=119.99795655 podStartE2EDuration="1m59.99795655s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:22.997557361 +0000 UTC m=+138.610375067" watchObservedRunningTime="2025-12-04 06:11:22.99795655 +0000 UTC m=+138.610774256"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.046489 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.046836 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.546825655 +0000 UTC m=+139.159643361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.074297 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.079262 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.087320 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.094567 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.106520 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.129849 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.134714 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.144218 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.145741 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.147836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.148334 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.648303086 +0000 UTC m=+139.261120792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.250375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.251062 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.751045888 +0000 UTC m=+139.363863594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.353075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.353852 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.8538321 +0000 UTC m=+139.466649806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.363094 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" event={"ID":"0ab7979f-0ea9-471f-a71a-75f869d58f14","Type":"ContainerStarted","Data":"82ef9d563b83b03e8caeda50105dd959bb8f828be64702b6b450efecd2178d01"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.364117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" event={"ID":"c52abcc0-4f0f-4094-9cbb-3bbad9978f53","Type":"ContainerStarted","Data":"d6ad706aea42572cb6f0c3ea36258b78b93522bbc8c7a401c06d742743d4f4e0"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.364139 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" event={"ID":"c52abcc0-4f0f-4094-9cbb-3bbad9978f53","Type":"ContainerStarted","Data":"5e6543a642a2dcd46ace3b0ce701b4714b50527e8412740897d6af60a28e50a2"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.376644 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" event={"ID":"ecf4e5af-74f0-44c8-9231-0719fa4d0f16","Type":"ContainerStarted","Data":"3e7754fa93618110fa4c3fb939930ab9af1a03b485c3fb21c0308037f7b9f74c"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.376680 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" event={"ID":"ecf4e5af-74f0-44c8-9231-0719fa4d0f16","Type":"ContainerStarted","Data":"27c6a5035435747a1323658cc85694330a19cd924b6cc8a6c238ae06e31fdd88"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.376690 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" event={"ID":"ecf4e5af-74f0-44c8-9231-0719fa4d0f16","Type":"ContainerStarted","Data":"c2f3e0e6a9a0dd2a36a3f569a188c6dae7e1d307d1a11295db5a2af03c7536cf"}
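
Note: each failed volume operation above is parked by nestedpendingoperations with a "No retries permitted until" deadline, here 500ms after the failure, and the reconciler simply re-queues the operation once that window passes (the kubelet may also widen the window for repeated failures; in this excerpt it stays at the initial 500ms). A toy Go sketch of that gate pattern, with hypothetical names; it is the shape of the mechanism, not the kubelet's code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pendingOp mimics the bookkeeping behind "No retries permitted until ...
// (durationBeforeRetry 500ms)". Hypothetical type, for illustration only.
type pendingOp struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

func (op *pendingOp) retryPermittedAt() time.Time {
	return op.lastErrorTime.Add(op.durationBeforeRetry)
}

func main() {
	op := &pendingOp{durationBeforeRetry: 500 * time.Millisecond} // value from the log
	attempt := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		if wait := time.Until(op.retryPermittedAt()); wait > 0 {
			time.Sleep(wait) // the real reconciler just skips the op until the deadline passes
		}
		if err := attempt(); err != nil {
			op.lastErrorTime = time.Now()
			fmt.Printf("failed. No retries permitted until %v (durationBeforeRetry %v). Error: %v\n",
				op.retryPermittedAt(), op.durationBeforeRetry, err)
		}
	}
}
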
event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" event={"ID":"ecf4e5af-74f0-44c8-9231-0719fa4d0f16","Type":"ContainerStarted","Data":"c2f3e0e6a9a0dd2a36a3f569a188c6dae7e1d307d1a11295db5a2af03c7536cf"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.405352 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm"] Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.454635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.455739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" event={"ID":"48194ca4-ce47-4309-a085-339c9b14f42b","Type":"ContainerStarted","Data":"d3c3c9c6adbccb9e1f8361741b18682fa1ef82d5d404f9537bba20f319605171"} Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.469352 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:23.969333437 +0000 UTC m=+139.582151143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.513010 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" event={"ID":"35c827cd-26e4-4f7a-ba65-bb717839a8d4","Type":"ContainerStarted","Data":"63ec760580568b971f57b69cffc05eef28c86d9a7588af54e58a8a6606329d85"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.513047 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" event={"ID":"35c827cd-26e4-4f7a-ba65-bb717839a8d4","Type":"ContainerStarted","Data":"4f6c89b582409a66c510e396f5a444fd20bedbba0cf524cec97c9455138b862a"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.530381 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" event={"ID":"d4de5b92-e3b3-480f-9241-23e3603eaff2","Type":"ContainerStarted","Data":"385d879bcbc283d8d0d061e982d8a9665e60f33a418b57ddbd5ae53867c12252"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.530439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" event={"ID":"d4de5b92-e3b3-480f-9241-23e3603eaff2","Type":"ContainerStarted","Data":"8a3aacedaca0253dd16636250f160a616b1f831ff15d9169a1747b4ddc267cd2"} Dec 
04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.531627 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.547291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jdgxv" event={"ID":"3bdc749a-22f3-4cb8-b987-04f7bc297cde","Type":"ContainerStarted","Data":"e27c8623c46aa7393ef4e56e629af6d7212efe48199e4182823d6925d0faa283"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.549724 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.550079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" event={"ID":"a9d3c038-c11e-4925-8802-5c8b57b1aeef","Type":"ContainerStarted","Data":"cd1de5046ef2494ce921a6c09b359a2bafe7266c0e249d2c704ffaae7c100f8b"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.550102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" event={"ID":"a9d3c038-c11e-4925-8802-5c8b57b1aeef","Type":"ContainerStarted","Data":"dab649bb461c59e81a726e6bc153581a9f5ddb949e67e724429706b7ad3a21d6"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.551493 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" event={"ID":"60021a99-658a-4bde-81c3-dae4f4870628","Type":"ContainerStarted","Data":"5c3d8167eca37cb7b16489c7b4da1c8eb461454537d4d4b178f40056a3ef05b4"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.554722 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tw7nf" event={"ID":"b9cd00db-0b78-4c09-8063-2c2bd201fe57","Type":"ContainerStarted","Data":"03369fff8a9300068380c3c45ecf42ecc5d76761a654ca56787e36497457377d"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.554775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tw7nf" event={"ID":"b9cd00db-0b78-4c09-8063-2c2bd201fe57","Type":"ContainerStarted","Data":"fc5af019b7800961dce47a8a6c83f9c6b671ae36fa0a4d44cb86f8570080aadc"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.555746 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.555898 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.556108 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.056076415 +0000 UTC m=+139.668894131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.564729 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tw7nf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.564772 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tw7nf" podUID="b9cd00db-0b78-4c09-8063-2c2bd201fe57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.595249 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" event={"ID":"3c4dc6e5-57fe-454f-89e6-7c37768004b4","Type":"ContainerStarted","Data":"ec407bc51910b91a44c831c7316758a45e3a61058d37343b829a44e5f9b2d2df"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.599073 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85"] Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.635899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" event={"ID":"d35e6baa-6315-48ee-904c-05da7d436283","Type":"ContainerStarted","Data":"fa5699a94a585fd9bd7c2c95f0a80511f6f5b3093c826b01421ca34662648c0e"} Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.657250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.661244 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.161230457 +0000 UTC m=+139.774048163 (durationBeforeRetry 500ms). 
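
Note: the readiness failure for downloads-7954f5f757-tw7nf above is the kubelet's prober doing a plain HTTP GET against the pod IP; "connection refused" just means download-server is not listening on 10.217.0.11:8080 yet. A standalone Go approximation of that check, assuming the usual kubelet rule that any status in [200,400) counts as success; this is a sketch, not the kubelet prober itself:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP readiness check; the URL comes from the log above.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second} // ~1s, the default probe timeout
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.11:8080: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil // assumption: mirror the kubelet's 2xx/3xx success rule
	}
	return fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	if err := probe("http://10.217.0.11:8080/"); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("ready")
}
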
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.685951 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-766fs" event={"ID":"361acab0-1cd6-48fc-b6ef-c77dc3092f98","Type":"ContainerStarted","Data":"a570c86431774072f810b2270876fcc4448e5b90af4d784dd7421de1cce8c4e7"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.691334 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" event={"ID":"a68efa85-b0ea-4db2-9b73-0dc87b2c8328","Type":"ContainerStarted","Data":"ed8a8ce7823481afbd82e9b65ac5e9e24da31adfe8074b90aca69376f138f16e"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.701074 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" event={"ID":"ce327d25-76b0-4c8b-a163-05f4e0976c34","Type":"ContainerStarted","Data":"960cecd378b68951fc5c6b49caea19175043874460ebd5e66f70e53571985abb"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.701131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" event={"ID":"ce327d25-76b0-4c8b-a163-05f4e0976c34","Type":"ContainerStarted","Data":"b93d558d8798d21b99dad90a9f94d2ded62dfae54955ca3aadd0e76505d0b969"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.706319 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2thm" event={"ID":"50fb7e5f-0fc6-47d2-a953-8fece3489792","Type":"ContainerStarted","Data":"42c2e2247b6ae8c20ba42d020b1fa00e3e998f73f3a84d4a11b6396f2d98a26a"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.733091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" event={"ID":"3ac11867-dffb-4aa1-88ba-d607d5d6f97a","Type":"ContainerStarted","Data":"07a7338a8759290d0fc538310e0413f0bf25c6dd43962fc6195692e725cde282"}
Dec 04 06:11:23 crc kubenswrapper[4832]: W1204 06:11:23.735908 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b2ac879_133d_44de_8d0e_df502cc87c55.slice/crio-79cfd6eb064b000cb33f9f41d7d4d43c07d3a33c38693f302735c1a4fed5de8c WatchSource:0}: Error finding container 79cfd6eb064b000cb33f9f41d7d4d43c07d3a33c38693f302735c1a4fed5de8c: Status 404 returned error can't find the container with id 79cfd6eb064b000cb33f9f41d7d4d43c07d3a33c38693f302735c1a4fed5de8c
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.743475 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t8dcw"]
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.750954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" event={"ID":"96d01e99-95dd-4969-ab53-f94c7383886f","Type":"ContainerStarted","Data":"ee175d14f859061552fd86ccdbabbae45eef610d6024ac7c6ba275493c1fa30b"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.751496 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g"
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.765272 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.766255 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.266240494 +0000 UTC m=+139.879058200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.822919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" event={"ID":"8b214f93-e9ab-4500-9c6b-6319c5570459","Type":"ContainerStarted","Data":"3d30dbd7be5c920e5487585b8cc16fdc6c6af66b75756c43c05dce8bd1f2c1d3"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.849483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" event={"ID":"4a423478-8008-4169-a257-ee5b0701c460","Type":"ContainerStarted","Data":"7ae4fe689e156beeb6788359619b072887e2898724e6b2bc07826dbc9e8d0efd"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.849541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" event={"ID":"4a423478-8008-4169-a257-ee5b0701c460","Type":"ContainerStarted","Data":"1615cb9ea28ea85ebf40d1b559e609be69e96c56c78be3626cb3a1799697c596"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.857923 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" event={"ID":"e57a2b10-8b23-4085-a031-3263b4265ccc","Type":"ContainerStarted","Data":"a7d20a6c79c31b00fb205be7121e84d352d29b27b13e36b8745773d36c30a859"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.857963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" event={"ID":"e57a2b10-8b23-4085-a031-3263b4265ccc","Type":"ContainerStarted","Data":"7a8e33ff1f7881bb7f3a68ba4545bdd45c08401e98d527d9437013bea7dc2349"}
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.869663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.871159 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.37113091 +0000 UTC m=+139.983948616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:23 crc kubenswrapper[4832]: I1204 06:11:23.970695 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:23 crc kubenswrapper[4832]: E1204 06:11:23.971995 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.471979935 +0000 UTC m=+140.084797641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.073644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.073978 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.573964279 +0000 UTC m=+140.186781985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.167581 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" podStartSLOduration=120.167566435 podStartE2EDuration="2m0.167566435s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.124883943 +0000 UTC m=+139.737701639" watchObservedRunningTime="2025-12-04 06:11:24.167566435 +0000 UTC m=+139.780384131"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.175433 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.175782 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.675767297 +0000 UTC m=+140.288585003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: W1204 06:11:24.188424 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf57fcb_7a14_4523_90f9_0a62334537cf.slice/crio-05ac489271af784ccd8bf1ef29b8d048c2c9409883f2b8c6febd109483800f63 WatchSource:0}: Error finding container 05ac489271af784ccd8bf1ef29b8d048c2c9409883f2b8c6febd109483800f63: Status 404 returned error can't find the container with id 05ac489271af784ccd8bf1ef29b8d048c2c9409883f2b8c6febd109483800f63
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.288997 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.289378 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.789364257 +0000 UTC m=+140.402181963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.360338 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5zmrn" podStartSLOduration=121.360317985 podStartE2EDuration="2m1.360317985s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.359962417 +0000 UTC m=+139.972780133" watchObservedRunningTime="2025-12-04 06:11:24.360317985 +0000 UTC m=+139.973135691"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.389789 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.390190 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.890174772 +0000 UTC m=+140.502992478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.491208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.493789 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:24.993773975 +0000 UTC m=+140.606591681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.562746 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" podStartSLOduration=120.562728844 podStartE2EDuration="2m0.562728844s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.561688958 +0000 UTC m=+140.174506674" watchObservedRunningTime="2025-12-04 06:11:24.562728844 +0000 UTC m=+140.175546550"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.595767 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.596412 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.096363063 +0000 UTC m=+140.709180769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.596468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.596964 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.096955357 +0000 UTC m=+140.709773063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
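
Note: the "Observed pod startup duration" entries are simple timestamp arithmetic; the reported podStartE2EDuration is consistent with watchObservedRunningTime minus podCreationTimestamp (the zero-valued firstStartedPulling/lastFinishedPulling mean no image pull was observed for these pods). A quick Go check against the route-controller-manager entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the route-controller-manager-6576b87f9c-lhfgj entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-04 06:09:24 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-12-04 06:11:24.167566435 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2m0.167566435s, matching podStartE2EDuration
}
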
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.644920 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tw7nf" podStartSLOduration=121.644901409 podStartE2EDuration="2m1.644901409s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.609258501 +0000 UTC m=+140.222076207" watchObservedRunningTime="2025-12-04 06:11:24.644901409 +0000 UTC m=+140.257719115"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.647768 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xzjgl" podStartSLOduration=120.647753909 podStartE2EDuration="2m0.647753909s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.643519855 +0000 UTC m=+140.256337581" watchObservedRunningTime="2025-12-04 06:11:24.647753909 +0000 UTC m=+140.260571615"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.673433 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t"]
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.677589 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wr2lk"]
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.719164 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.731572 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.231538574 +0000 UTC m=+140.844356280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.795752 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gcvsv" podStartSLOduration=120.795727976 podStartE2EDuration="2m0.795727976s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.704292473 +0000 UTC m=+140.317110209" watchObservedRunningTime="2025-12-04 06:11:24.795727976 +0000 UTC m=+140.408545682"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.802666 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hvs6v" podStartSLOduration=120.802642156 podStartE2EDuration="2m0.802642156s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.732023826 +0000 UTC m=+140.344841532" watchObservedRunningTime="2025-12-04 06:11:24.802642156 +0000 UTC m=+140.415459862"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.828632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.828978 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.328964955 +0000 UTC m=+140.941782661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.872094 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x48rh" podStartSLOduration=120.872070958 podStartE2EDuration="2m0.872070958s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.87052756 +0000 UTC m=+140.483345266" watchObservedRunningTime="2025-12-04 06:11:24.872070958 +0000 UTC m=+140.484888664"
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.923741 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zknlt"]
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.924043 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vcj7x"]
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.924057 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4"]
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.924083 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wtnbm"]
Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.933093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:24 crc kubenswrapper[4832]: E1204 06:11:24.933656 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.433635935 +0000 UTC m=+141.046453641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
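
Note: this retry loop only ends once the hostpath provisioner's node plugin registers with the kubelet (the SyncLoop UPDATE for hostpath-provisioner/csi-hostpathplugin-fw5rj that follows is the pod that presumably provides it). A Go sketch of waiting for that registration by polling for the driver's registration socket; both the directory layout and the socket name (node-driver-registrar conventionally creates <driver-name>-reg.sock under plugins_registry) are assumptions about this cluster, not taken from the log:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed path and name; verify against the node before relying on it.
	sock := "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	for {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("driver registered:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond) // same cadence as durationBeforeRetry in the log
	}
}
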
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.945465 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fw5rj"] Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.957100 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" event={"ID":"5b2ac879-133d-44de-8d0e-df502cc87c55","Type":"ContainerStarted","Data":"18fdbbe463709141a7b165eb1212c88973321ea8d8ac6ac35c62ed943a469cb4"} Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.957154 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" event={"ID":"5b2ac879-133d-44de-8d0e-df502cc87c55","Type":"ContainerStarted","Data":"79cfd6eb064b000cb33f9f41d7d4d43c07d3a33c38693f302735c1a4fed5de8c"} Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.978453 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj"] Dec 04 06:11:24 crc kubenswrapper[4832]: I1204 06:11:24.981302 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwnml" podStartSLOduration=120.981281609 podStartE2EDuration="2m0.981281609s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:24.947448296 +0000 UTC m=+140.560266002" watchObservedRunningTime="2025-12-04 06:11:24.981281609 +0000 UTC m=+140.594099315" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.002760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zknlt" event={"ID":"ee453606-81c1-43d2-9121-c7a830f193cc","Type":"ContainerStarted","Data":"8b9633340c0fbe111b4121afe6e376b650719aace4cc0602b836a83a4573d24d"} Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.010149 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qzdgh"] Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.011338 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.012174 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" event={"ID":"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3","Type":"ContainerStarted","Data":"e844e93ac0ac8fc7734a34cf89da8d694a9af0ee19f7599dc293908687f5aadf"} Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.016443 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.027540 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g" podStartSLOduration=122.007367612 podStartE2EDuration="2m2.007367612s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.002593794 +0000 UTC m=+140.615411510" watchObservedRunningTime="2025-12-04 06:11:25.007367612 +0000 UTC m=+140.620185318" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.035079 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.035149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-catalog-content\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.035178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnt66\" (UniqueName: \"kubernetes.io/projected/f0f3ccce-259a-43f4-883e-a8f278c34053-kube-api-access-xnt66\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.035264 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-utilities\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.035784 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.535768812 +0000 UTC m=+141.148586528 (durationBeforeRetry 500ms). 
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.042801 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" event={"ID":"60021a99-658a-4bde-81c3-dae4f4870628","Type":"ContainerStarted","Data":"2b279a6a2d3a03d1767e31d274071ad53526e5c6089c537c3ea2ebe83f59033b"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.065238 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-62qg2" podStartSLOduration=121.065215678 podStartE2EDuration="2m1.065215678s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.048322182 +0000 UTC m=+140.661139878" watchObservedRunningTime="2025-12-04 06:11:25.065215678 +0000 UTC m=+140.678033384"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.067308 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.068065 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" event={"ID":"3ac11867-dffb-4aa1-88ba-d607d5d6f97a","Type":"ContainerStarted","Data":"8ea5ffaf15c1a2e1c3acf1b21e967896e927539080a4ca26a1ad034615612da3"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.086992 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cpzbl"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.094322 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8dcw" event={"ID":"0bf57fcb-7a14-4523-90f9-0a62334537cf","Type":"ContainerStarted","Data":"7734ab297a8b8cd957bef4b89a1d2cf67cd134733110d50fa397696fc38c1b6e"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.094360 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8dcw" event={"ID":"0bf57fcb-7a14-4523-90f9-0a62334537cf","Type":"ContainerStarted","Data":"05ac489271af784ccd8bf1ef29b8d048c2c9409883f2b8c6febd109483800f63"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.120053 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c7tjz" podStartSLOduration=121.120029129 podStartE2EDuration="2m1.120029129s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.08520074 +0000 UTC m=+140.698018466" watchObservedRunningTime="2025-12-04 06:11:25.120029129 +0000 UTC m=+140.732846835"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.123490 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qzdgh"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.142028 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.142259 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-catalog-content\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.142289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnt66\" (UniqueName: \"kubernetes.io/projected/f0f3ccce-259a-43f4-883e-a8f278c34053-kube-api-access-xnt66\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.142356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-utilities\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.142782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" event={"ID":"bc09cb39-1b31-47c6-88c7-8c15d31c4960","Type":"ContainerStarted","Data":"0ea88904d6df2f24d9fbb56f9e2eb00fdb9cb068d94cde01ae7017ecadf11549"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.142825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" event={"ID":"bc09cb39-1b31-47c6-88c7-8c15d31c4960","Type":"ContainerStarted","Data":"5b191cecca57e14ae43cac58d5d685a0e79df7d6829c4eecc59dcabae0f7ad02"}
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.149562 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.649527846 +0000 UTC m=+141.262345562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
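Each E1204 nestedpendingoperations.go:348 line is the volume manager parking a failed mount or unmount and refusing to re-issue the same operation until a backoff deadline passes ("No retries permitted until ... (durationBeforeRetry 500ms)"). A sketch of that per-operation gate, assuming a simple exponential backoff; note that in this excerpt the logged delay stays at 500ms from round to round, so the doubling below is an illustrative assumption, not something the log itself shows:

package main

import (
	"fmt"
	"time"
)

// pendingOp tracks when a failed operation may run again, loosely
// modeled on the "No retries permitted until ..." gate in the log.
type pendingOp struct {
	notBefore time.Time
	backoff   time.Duration
}

type opGate struct{ ops map[string]*pendingOp }

const (
	initialBackoff = 500 * time.Millisecond // matches durationBeforeRetry 500ms
	maxBackoff     = 2 * time.Minute        // assumed cap for the sketch
)

// tryRun runs fn unless the key is still inside its backoff window.
func (g *opGate) tryRun(key string, now time.Time, fn func() error) error {
	op := g.ops[key]
	if op != nil && now.Before(op.notBefore) {
		return fmt.Errorf("no retries permitted until %s", op.notBefore.Format(time.RFC3339Nano))
	}
	err := fn()
	if err == nil {
		delete(g.ops, key)
		return nil
	}
	// On failure, schedule the next attempt and grow the backoff.
	if op == nil {
		op = &pendingOp{backoff: initialBackoff}
		g.ops[key] = op
	} else if op.backoff < maxBackoff {
		op.backoff *= 2
	}
	op.notBefore = now.Add(op.backoff)
	return fmt.Errorf("%v (durationBeforeRetry %s)", err, op.backoff)
}

func main() {
	g := &opGate{ops: map[string]*pendingOp{}}
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	now := time.Now()
	// First attempt fails and arms the 500ms gate.
	fmt.Println(g.tryRun(key, now, func() error { return fmt.Errorf("driver not registered") }))
	// A retry 100ms later is rejected without even running, as in the log.
	fmt.Println(g.tryRun(key, now.Add(100*time.Millisecond), func() error { return nil }))
}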
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.150416 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-utilities\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.151019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-catalog-content\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.155197 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.167026 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pqqsl"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.178601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" event={"ID":"3c4dc6e5-57fe-454f-89e6-7c37768004b4","Type":"ContainerStarted","Data":"ed90b2ad1211da756fea196dfc7d7eb20223f80c45dc6c2812228ead889fa18c"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.189307 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d2rth" podStartSLOduration=121.189278295 podStartE2EDuration="2m1.189278295s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.110067483 +0000 UTC m=+140.722885189" watchObservedRunningTime="2025-12-04 06:11:25.189278295 +0000 UTC m=+140.802096001"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.210237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2thm" event={"ID":"50fb7e5f-0fc6-47d2-a953-8fece3489792","Type":"ContainerStarted","Data":"a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.222072 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mt7dw"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.222929 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mt7dw"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.223017 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.225605 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.230945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnt66\" (UniqueName: \"kubernetes.io/projected/f0f3ccce-259a-43f4-883e-a8f278c34053-kube-api-access-xnt66\") pod \"community-operators-qzdgh\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.270045 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qzdgh"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.273865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.275332 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.775305276 +0000 UTC m=+141.388122982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.362105 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nbw42"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.363305 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbw42"
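util.go:30's "No sandbox for pod can be found. Need to start a new one" is the kubelet sync loop noticing that a newly added pod (here the marketplace catalog pods) has no running pod sandbox yet, so the next sync will ask the container runtime to create one before any regular containers can start. A stdlib-only sketch of that check-then-create decision; the types stand in for the CRI runtime service, which the real kubelet reaches over gRPC:

package main

import "fmt"

// sandboxStore stands in for the runtime's view of pod sandboxes,
// keyed by pod UID.
type sandboxStore map[string]string

// ensureSandbox mirrors the log's decision: reuse an existing sandbox
// if one exists, otherwise start a new one for the pod.
func ensureSandbox(store sandboxStore, podUID, podName string) string {
	if id, ok := store[podUID]; ok {
		return id
	}
	fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", podName)
	id := "sandbox-" + podUID[:8] // illustrative ID
	store[podUID] = id
	return id
}

func main() {
	store := sandboxStore{}
	ensureSandbox(store, "f0f3ccce-259a-43f4-883e-a8f278c34053", "openshift-marketplace/community-operators-qzdgh")
	// A later sync finds the sandbox and does nothing.
	ensureSandbox(store, "f0f3ccce-259a-43f4-883e-a8f278c34053", "openshift-marketplace/community-operators-qzdgh")
}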
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.374034 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbw42"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.377343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.377897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-utilities\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.377956 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjm7q\" (UniqueName: \"kubernetes.io/projected/3eb4072b-8c81-4808-b3a6-9be9fc814060-kube-api-access-gjm7q\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.377973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-catalog-content\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.378635 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.878621211 +0000 UTC m=+141.491438917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
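The reconciler_common lines trace the volume manager's reconcile loop, which repeatedly compares the desired world (volumes the scheduled pods need) against the actual world (what is attached and mounted): VerifyControllerAttachedVolume marks a volume attached, MountVolume kicks off the mount, and operation_generator's "MountVolume.SetUp succeeded" reports the actual state catching up. The empty-dir and projected volumes succeed immediately, while the CSI-backed PVC keeps failing until its driver registers. A compressed sketch of that loop, with the structure assumed for illustration:

package main

import "fmt"

// state maps volume name -> mounted; volume names follow the log.
type state map[string]bool

func reconcile(desired []string, actual state, mount func(string) error) {
	for _, v := range desired {
		if actual[v] {
			continue // already mounted; nothing to do
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		if err := mount(v); err != nil {
			fmt.Printf("MountVolume failed for %q: %v\n", v, err)
			continue // the reconciler retries on the next loop
		}
		actual[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
	}
}

func main() {
	actual := state{}
	desired := []string{"utilities", "catalog-content", "kube-api-access-gjm7q", "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"}
	// The empty-dir/projected volumes mount at once; the CSI PVC fails
	// until kubevirt.io.hostpath-provisioner registers, as in the log.
	reconcile(desired, actual, func(v string) error {
		if v == "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" {
			return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil
	})
}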
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.379238 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-snlqx" event={"ID":"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb","Type":"ContainerStarted","Data":"69c8b3a25da8b0d2863e27f96cf17ca472d550ed706090219cbee2f9edb6e28c"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.379303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-snlqx" event={"ID":"5352eab3-5b2d-436a-9d0a-6627d7f4f3eb","Type":"ContainerStarted","Data":"42a78b75713fa6028c2f31bff161c0bcd1c23aed3223dd44f33910dab71f4a21"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.386680 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-49ckx" podStartSLOduration=121.386659819 podStartE2EDuration="2m1.386659819s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.375963216 +0000 UTC m=+140.988780922" watchObservedRunningTime="2025-12-04 06:11:25.386659819 +0000 UTC m=+140.999477525"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.414777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" event={"ID":"0ab7979f-0ea9-471f-a71a-75f869d58f14","Type":"ContainerStarted","Data":"2217989608a63794108231cf6e75f5b5094a3f9e68ad51173055b68c2b37421c"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.415746 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.423369 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nmzfk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body=
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.423464 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" podUID="0ab7979f-0ea9-471f-a71a-75f869d58f14" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.444664 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" podStartSLOduration=122.444627748 podStartE2EDuration="2m2.444627748s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.440736472 +0000 UTC m=+141.053554178" watchObservedRunningTime="2025-12-04 06:11:25.444627748 +0000 UTC m=+141.057445454"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.454987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" event={"ID":"48194ca4-ce47-4309-a085-339c9b14f42b","Type":"ContainerStarted","Data":"89fe549c784629905495a05e5fb32e24b50bc61e3bc3a2ffccd8140b369f4596"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.468254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" event={"ID":"d1bc185a-fac5-4103-947a-d3d660802249","Type":"ContainerStarted","Data":"d21da918ba497fef77640b463931850815dd050b28b74dec54104d698d3cbfaa"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.474984 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" event={"ID":"a68efa85-b0ea-4db2-9b73-0dc87b2c8328","Type":"ContainerStarted","Data":"559ab5626e57adde82c9486c434ebbbe64951b0f10c92bfaad345eb69a04baf1"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.478827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" event={"ID":"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5","Type":"ContainerStarted","Data":"de851020699aff65230bd6a3f06b17df9e307d6cd382abe18eb5ae77c2a673db"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.478877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" event={"ID":"e5bd8e08-cd81-4930-bbeb-3b7964c55cb5","Type":"ContainerStarted","Data":"eeb22a9b09b1e89f9276f24324374f95ed2ef7094e52c7ddfdbddf7ce8dfcdd0"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.478930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-utilities\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.478974 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrwv\" (UniqueName: \"kubernetes.io/projected/29b70a28-7b0c-486b-a0f9-76e2e877cf26-kube-api-access-hkrwv\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.479031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-catalog-content\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.479060 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjm7q\" (UniqueName: \"kubernetes.io/projected/3eb4072b-8c81-4808-b3a6-9be9fc814060-kube-api-access-gjm7q\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.479086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-catalog-content\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.479152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-utilities\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.479186 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.479577 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:25.979560379 +0000 UTC m=+141.592378165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.479745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-utilities\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.480196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-catalog-content\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.493499 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zzf4r" event={"ID":"d35e6baa-6315-48ee-904c-05da7d436283","Type":"ContainerStarted","Data":"62f7b52f40863b59e528f577788315e24dd855cf0c3a161263e852a51eb2c3cd"}
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.493536 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tw7nf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.493589 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tw7nf" podUID="b9cd00db-0b78-4c09-8063-2c2bd201fe57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
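The patch_prober/prober pairs above record ordinary startup-window noise: the container has just started, nothing is listening on the probe port yet, so the HTTP readiness probe fails with connection refused and the pod stays unready until a later probe passes. A minimal version of such an HTTP probe, stdlib only; the one-second timeout and the skipped TLS verification are assumptions for the sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeReadiness performs one HTTP GET probe the way the log records it:
// any dial error or non-2xx/3xx status counts as a failed probe.
func probeReadiness(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second, // assumed probe timeout
		Transport: &http.Transport{
			// HTTPS probes here do not verify the serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("Probe failed: Get %q: %v", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("Probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Nothing listens on this local port, so this prints a
	// connection-refused failure analogous to the packageserver probe.
	if err := probeReadiness("https://127.0.0.1:5443/healthz"); err != nil {
		fmt.Println(err)
	}
}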
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.509250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjm7q\" (UniqueName: \"kubernetes.io/projected/3eb4072b-8c81-4808-b3a6-9be9fc814060-kube-api-access-gjm7q\") pod \"certified-operators-mt7dw\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.537742 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r6f6g"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.540062 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v2d94"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.541027 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.568081 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2d94"]
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.581212 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.581500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-catalog-content\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.581605 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-utilities\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.581895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrwv\" (UniqueName: \"kubernetes.io/projected/29b70a28-7b0c-486b-a0f9-76e2e877cf26-kube-api-access-hkrwv\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.582155 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.082136107 +0000 UTC m=+141.694953813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.583612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-utilities\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.585545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-catalog-content\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.621797 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mt7dw"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.642067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrwv\" (UniqueName: \"kubernetes.io/projected/29b70a28-7b0c-486b-a0f9-76e2e877cf26-kube-api-access-hkrwv\") pod \"community-operators-nbw42\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.688132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-catalog-content\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.688341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtl8g\" (UniqueName: \"kubernetes.io/projected/adfeb9cb-8e12-4ba9-aafe-8753a774d720-kube-api-access-qtl8g\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.688536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-utilities\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.688687 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.736682 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" podStartSLOduration=121.736662635 podStartE2EDuration="2m1.736662635s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.670807232 +0000 UTC m=+141.283624938" watchObservedRunningTime="2025-12-04 06:11:25.736662635 +0000 UTC m=+141.349480341"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.739401 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.239366732 +0000 UTC m=+141.852184438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.782910 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g2thm" podStartSLOduration=122.782873344 podStartE2EDuration="2m2.782873344s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.781300696 +0000 UTC m=+141.394118422" watchObservedRunningTime="2025-12-04 06:11:25.782873344 +0000 UTC m=+141.395691050"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.791798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.792121 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-catalog-content\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.792176 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtl8g\" (UniqueName: \"kubernetes.io/projected/adfeb9cb-8e12-4ba9-aafe-8753a774d720-kube-api-access-qtl8g\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.792246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-utilities\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.793427 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.293410074 +0000 UTC m=+141.906227780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.797443 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-utilities\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.797787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-catalog-content\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.836332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtl8g\" (UniqueName: \"kubernetes.io/projected/adfeb9cb-8e12-4ba9-aafe-8753a774d720-kube-api-access-qtl8g\") pod \"certified-operators-v2d94\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.851465 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jdgxv"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.890998 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" podStartSLOduration=121.890977398 podStartE2EDuration="2m1.890977398s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.851800412 +0000 UTC m=+141.464618118" watchObservedRunningTime="2025-12-04 06:11:25.890977398 +0000 UTC m=+141.503795104"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.900040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:25 crc kubenswrapper[4832]: E1204 06:11:25.900441 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.400426081 +0000 UTC m=+142.013243787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.926288 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jdgxv" podStartSLOduration=121.926273348 podStartE2EDuration="2m1.926273348s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.889666775 +0000 UTC m=+141.502484481" watchObservedRunningTime="2025-12-04 06:11:25.926273348 +0000 UTC m=+141.539091054"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.949453 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqfp2" podStartSLOduration=122.949428178 podStartE2EDuration="2m2.949428178s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.925743995 +0000 UTC m=+141.538561701" watchObservedRunningTime="2025-12-04 06:11:25.949428178 +0000 UTC m=+141.562245884"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.953197 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbw42"
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.953371 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 04 06:11:25 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld
Dec 04 06:11:25 crc kubenswrapper[4832]: [+]process-running ok
Dec 04 06:11:25 crc kubenswrapper[4832]: healthz check failed
Dec 04 06:11:25 crc kubenswrapper[4832]: I1204 06:11:25.995674 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.004421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.004839 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.504822904 +0000 UTC m=+142.117640610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
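The router's startup probe output above shows the usual Kubernetes healthz convention: the endpoint runs a set of named sub-checks, prints [+]name ok or [-]name failed: reason withheld per check, appends "healthz check failed", and returns HTTP 500 if any check failed, which is exactly the statuscode: 500 the prober reports. A small handler in that style; the check names are taken from the log output, but the handler itself is an illustrative sketch, not the router's code:

package main

import (
	"fmt"
	"net/http"
)

// healthzHandler aggregates named checks healthz-style: any failing
// check yields HTTP 500 plus a [-] line, as in the router probe output.
// Check order is not guaranteed here because Go maps iterate randomly.
func healthzHandler(checks map[string]func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for name, check := range checks {
			if err := check(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body, "healthz check failed\n")
			return
		}
		fmt.Fprint(w, body, "ok\n")
	}
}

func main() {
	http.Handle("/healthz", healthzHandler(map[string]func() error{
		"backend-http":    func() error { return fmt.Errorf("no backends yet") },
		"has-synced":      func() error { return fmt.Errorf("not synced yet") },
		"process-running": func() error { return nil },
	}))
	// Probe it with: curl -i localhost:8080/healthz
	fmt.Println(http.ListenAndServe(":8080", nil))
}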
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.023050 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2d94"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.042153 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f77zm" podStartSLOduration=122.042135873 podStartE2EDuration="2m2.042135873s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:25.991058485 +0000 UTC m=+141.603876191" watchObservedRunningTime="2025-12-04 06:11:26.042135873 +0000 UTC m=+141.654953579"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.096305 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nbpnv" podStartSLOduration=122.096284708 podStartE2EDuration="2m2.096284708s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.089932322 +0000 UTC m=+141.702750038" watchObservedRunningTime="2025-12-04 06:11:26.096284708 +0000 UTC m=+141.709102414"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.117075 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.117589 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.617574672 +0000 UTC m=+142.230392378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.132584 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-snlqx" podStartSLOduration=7.132569032 podStartE2EDuration="7.132569032s" podCreationTimestamp="2025-12-04 06:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.129112077 +0000 UTC m=+141.741929813" watchObservedRunningTime="2025-12-04 06:11:26.132569032 +0000 UTC m=+141.745386738"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.217888 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.218614 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.718599722 +0000 UTC m=+142.331417428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.321501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.322146 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.822134904 +0000 UTC m=+142.434952610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
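The pod_startup_latency_tracker lines are pure bookkeeping: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (the pull timestamps stay at Go's zero time here because no image pull was needed), and podStartE2EDuration is the same figure rendered as a Go duration; the m=+141.6-style suffixes are Go's monotonic-clock readings, seconds since kubelet start. Checking the arithmetic for the kube-scheduler-operator record above, with both timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the openshift-kube-scheduler-operator record.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-04 06:09:24 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-04 06:11:26.042135873 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// podStartSLOduration = watchObservedRunningTime - podCreationTimestamp.
	slo := observed.Sub(created)
	fmt.Println(slo.Seconds()) // 122.042135873  (podStartSLOduration)
	fmt.Println(slo)           // 2m2.042135873s (podStartE2EDuration)
}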
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.423582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.423934 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.923908292 +0000 UTC m=+142.536725998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.428939 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.429460 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:26.929446679 +0000 UTC m=+142.542264375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.513119 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qzdgh"]
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.530975 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.531312 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.031298279 +0000 UTC m=+142.644115985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.531591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" event={"ID":"d1bc185a-fac5-4103-947a-d3d660802249","Type":"ContainerStarted","Data":"81abaebb50da5e7347514ea5a4711dc7c0f0157872f50947a4a24cd3b202adde"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.532835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.534068 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mt7dw"]
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.545406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" event={"ID":"5b2ac879-133d-44de-8d0e-df502cc87c55","Type":"ContainerStarted","Data":"8d7d97a790c276bbedaee29f79f9ab96d2adde3e3cb7b69d78e36f1e1bda3f05"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.550248 4832 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vcj7x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body=
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.550606 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" podUID="d1bc185a-fac5-4103-947a-d3d660802249" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.560549 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" podStartSLOduration=123.560524589 podStartE2EDuration="2m3.560524589s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.557637238 +0000 UTC m=+142.170454954" watchObservedRunningTime="2025-12-04 06:11:26.560524589 +0000 UTC m=+142.173342295"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.564899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" event={"ID":"3c4dc6e5-57fe-454f-89e6-7c37768004b4","Type":"ContainerStarted","Data":"247f77011654005e707b4e87fa1be4b8791f5412b2adbc415d981d1391f4d189"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.569109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" event={"ID":"3e212703-f85d-4128-bbff-a3057263d6d3","Type":"ContainerStarted","Data":"e656a0fcac1abced1e4dbf2854fe2aaaac77aa942fa07ae743b6ea15adb484eb"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.569147 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" event={"ID":"3e212703-f85d-4128-bbff-a3057263d6d3","Type":"ContainerStarted","Data":"42f7799d8eb2b74ee28a7548dd6bf232eb0e19aa3e43b9d356765cebef5e258e"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.569972 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.574834 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" event={"ID":"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4","Type":"ContainerStarted","Data":"a976c968d01d5ad0ca265d98d9dab7fd8d48dd9e93b5b036a771e1a059296366"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.575858 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8lgmq" podStartSLOduration=123.575848037 podStartE2EDuration="2m3.575848037s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.571796067 +0000 UTC m=+142.184613773" watchObservedRunningTime="2025-12-04 06:11:26.575848037 +0000 UTC m=+142.188665743"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.581415 4832 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cpzbl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.581468 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" podUID="3e212703-f85d-4128-bbff-a3057263d6d3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.616255 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" podStartSLOduration=122.616232831 podStartE2EDuration="2m2.616232831s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.61411195 +0000 UTC m=+142.226929666" watchObservedRunningTime="2025-12-04 06:11:26.616232831 +0000 UTC m=+142.229050537"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.632861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.647698 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.147599125 +0000 UTC m=+142.760416821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.652295 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbw42"]
Dec 04 06:11:26 crc kubenswrapper[4832]: W1204 06:11:26.673189 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29b70a28_7b0c_486b_a0f9_76e2e877cf26.slice/crio-976d89c39a16597bf7445af2877f3fa3a73994d4ee0f7831a5e7afc12e1fd340 WatchSource:0}: Error finding container 976d89c39a16597bf7445af2877f3fa3a73994d4ee0f7831a5e7afc12e1fd340: Status 404 returned error can't find the container with id 976d89c39a16597bf7445af2877f3fa3a73994d4ee0f7831a5e7afc12e1fd340
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.678791 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pqqsl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.680688 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.676504 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" event={"ID":"79d8eb21-a98b-45c5-9406-8e5d64e59fa0","Type":"ContainerStarted","Data":"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.680911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" event={"ID":"79d8eb21-a98b-45c5-9406-8e5d64e59fa0","Type":"ContainerStarted","Data":"d207888637ba149ce10589bcee220ff9ce0ef6d57d3b8be7b01880c2a45f8529"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.681011 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.698964 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sqv98" podStartSLOduration=122.69894851 podStartE2EDuration="2m2.69894851s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.676695682 +0000 UTC m=+142.289513378" watchObservedRunningTime="2025-12-04 06:11:26.69894851 +0000 UTC m=+142.311766216"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.699462 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" podStartSLOduration=122.699457353 podStartE2EDuration="2m2.699457353s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.698913079 +0000 UTC m=+142.311730785" watchObservedRunningTime="2025-12-04 06:11:26.699457353 +0000 UTC m=+142.312275059"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.703623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8dcw" event={"ID":"0bf57fcb-7a14-4523-90f9-0a62334537cf","Type":"ContainerStarted","Data":"b136f5416869363d5915a3cbe4c6bb26ebbc1e5610b87a67e6ef78d6d13c5c40"}
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.703873 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-t8dcw"
Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.734047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.734383 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.234355473 +0000 UTC m=+142.847173179 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.734594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.736217 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.236202428 +0000 UTC m=+142.849020134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.744212 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-t8dcw" podStartSLOduration=7.744195126 podStartE2EDuration="7.744195126s" podCreationTimestamp="2025-12-04 06:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.743652502 +0000 UTC m=+142.356470208" watchObservedRunningTime="2025-12-04 06:11:26.744195126 +0000 UTC m=+142.357012822" Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.843311 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.844548 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.344520557 +0000 UTC m=+142.957338263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.867782 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:26 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:26 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:26 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.867833 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.869064 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zknlt" podStartSLOduration=7.8690539919999996 podStartE2EDuration="7.869053992s" podCreationTimestamp="2025-12-04 06:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.868311483 +0000 UTC m=+142.481129189" watchObservedRunningTime="2025-12-04 06:11:26.869053992 +0000 UTC m=+142.481871698" Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.900986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" event={"ID":"a07eda47-4b27-4396-90a1-a6a1569a6f99","Type":"ContainerStarted","Data":"b536670a0cbc1d8cbae165957da7ff42e7c9b06d30542d333083e4e1bba8d3fb"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.901022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jdgxv" event={"ID":"3bdc749a-22f3-4cb8-b987-04f7bc297cde","Type":"ContainerStarted","Data":"250574ced7a1c32e306834144d896ff2328a3f159fc844203ecca726fe46b613"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.901049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" event={"ID":"95ed2e36-8e95-4012-be78-7f7e66d0349f","Type":"ContainerStarted","Data":"4764e269c648f9a36e9ecbb841f0511b4c570ba02031bcbedafb4989cb7eada0"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.901064 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" event={"ID":"95ed2e36-8e95-4012-be78-7f7e66d0349f","Type":"ContainerStarted","Data":"f9e73806fb8eb74797b5425712cdce253cbdddad3689bab40742366fe20046ab"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.901075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zknlt" event={"ID":"ee453606-81c1-43d2-9121-c7a830f193cc","Type":"ContainerStarted","Data":"2814373401acf5de2c43a05199987cc82474b5455c9ce63d330434f71af20778"} Dec 04 06:11:26 crc 
kubenswrapper[4832]: I1204 06:11:26.901093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" event={"ID":"21e75fec-8174-41c0-82b1-a01786d46246","Type":"ContainerStarted","Data":"1d48dd0aaaf7eaffb6e2e5f79bb9cdfde78c752d2984e1eacc30678d2f13fa84"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.901102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" event={"ID":"c64d9ec8-5ba3-455b-8932-d73b55863bf6","Type":"ContainerStarted","Data":"534389a0f3a697a58ad34a4acb257fa4e3371e608c6f5ffad477b25b68b6d2e2"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.901111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" event={"ID":"c64d9ec8-5ba3-455b-8932-d73b55863bf6","Type":"ContainerStarted","Data":"fee595f59eef9242b5561667cbfe6bddf0c866e8b0cbe080bf3cd7203ea91f72"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.936006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" event={"ID":"3d59fb7a-ef01-4919-8060-615a77afd343","Type":"ContainerStarted","Data":"d437418b39630900e2e9bdfaebf91d1dc14c3939fabe7e7e98018ce1da3dd54d"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.936235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" event={"ID":"3d59fb7a-ef01-4919-8060-615a77afd343","Type":"ContainerStarted","Data":"c0885c5ffa2fff6e3234d31c336a144f896d7bc0f1a9bde5e613d3f73729dbea"} Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.937166 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.947762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:26 crc kubenswrapper[4832]: E1204 06:11:26.952485 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.452464558 +0000 UTC m=+143.065282324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.968841 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wr2lk" podStartSLOduration=122.96881937 podStartE2EDuration="2m2.96881937s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:26.932211638 +0000 UTC m=+142.545029364" watchObservedRunningTime="2025-12-04 06:11:26.96881937 +0000 UTC m=+142.581637076" Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.977996 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2d94"] Dec 04 06:11:26 crc kubenswrapper[4832]: I1204 06:11:26.978771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" event={"ID":"3ac11867-dffb-4aa1-88ba-d607d5d6f97a","Type":"ContainerStarted","Data":"7e2fd33067ff0235dc22a83fde1dc8efbfed2f111aec0ed00af2b297238ba6eb"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.014251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-766fs" event={"ID":"361acab0-1cd6-48fc-b6ef-c77dc3092f98","Type":"ContainerStarted","Data":"3942fb45d09486ca9b0cc1b597a8ccee14533da76b351ba9623077973e1d1c5b"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.014741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-766fs" event={"ID":"361acab0-1cd6-48fc-b6ef-c77dc3092f98","Type":"ContainerStarted","Data":"48f4adbc2a6d1e38bc31e666547e7a7669190e857643d6cd9df5ff67c6284296"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.015880 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" podStartSLOduration=123.01586842 podStartE2EDuration="2m3.01586842s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:27.015169222 +0000 UTC m=+142.627986928" watchObservedRunningTime="2025-12-04 06:11:27.01586842 +0000 UTC m=+142.628686126" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.048413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.048772 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 06:11:27.54875925 +0000 UTC m=+143.161576956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.048863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.049872 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.549854007 +0000 UTC m=+143.162671763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.069089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" event={"ID":"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce","Type":"ContainerStarted","Data":"4c4276fd8c70b9a82537b28ddb49f412a35b2585b1952e6f43bafcbf568b17c1"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.069143 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" event={"ID":"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce","Type":"ContainerStarted","Data":"2a7bd7ccfaae2cb776b3d5076a2c07b879f84e87cf27c048b620c359743725ea"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.079677 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-766fs" podStartSLOduration=123.079651502 podStartE2EDuration="2m3.079651502s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:27.07591586 +0000 UTC m=+142.688733586" watchObservedRunningTime="2025-12-04 06:11:27.079651502 +0000 UTC m=+142.692469208" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.091690 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-k95wg" event={"ID":"e57a2b10-8b23-4085-a031-3263b4265ccc","Type":"ContainerStarted","Data":"d8389678c8baf8b08515d29e67b1fe467ef80086e70019b60c32a12401e96470"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.096139 4832 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-stxqv" podStartSLOduration=123.096126018 podStartE2EDuration="2m3.096126018s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:27.094452027 +0000 UTC m=+142.707269723" watchObservedRunningTime="2025-12-04 06:11:27.096126018 +0000 UTC m=+142.708943724" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.118840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" event={"ID":"48eb0cd6-a766-4065-a4b4-d7d4bbe32bf3","Type":"ContainerStarted","Data":"105829aade71d1fc1b30acbe0273f72f90a15a76b913b26c7a25c9b7e7f5b61e"} Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.122928 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.124052 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tw7nf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.124088 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tw7nf" podUID="b9cd00db-0b78-4c09-8063-2c2bd201fe57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.137621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nmzfk" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.139406 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.139437 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" podStartSLOduration=123.139427915 podStartE2EDuration="2m3.139427915s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:27.137823646 +0000 UTC m=+142.750641362" watchObservedRunningTime="2025-12-04 06:11:27.139427915 +0000 UTC m=+142.752245621" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.150097 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.150749 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.650730113 +0000 UTC m=+143.263547819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.162939 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bs4tj"] Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.164240 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.170430 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.234983 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bs4tj"] Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.254640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-catalog-content\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.255478 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-utilities\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.255541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.255786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnw28\" (UniqueName: \"kubernetes.io/projected/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-kube-api-access-vnw28\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.259917 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.759896434 +0000 UTC m=+143.372714140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.273372 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8ft9t" podStartSLOduration=123.273353916 podStartE2EDuration="2m3.273353916s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:27.271834549 +0000 UTC m=+142.884652245" watchObservedRunningTime="2025-12-04 06:11:27.273353916 +0000 UTC m=+142.886171622" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.366192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.366332 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnw28\" (UniqueName: \"kubernetes.io/projected/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-kube-api-access-vnw28\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.366544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-catalog-content\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.366572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-utilities\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.366893 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.866878051 +0000 UTC m=+143.479695757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.369030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-utilities\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.369601 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-catalog-content\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.392418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnw28\" (UniqueName: \"kubernetes.io/projected/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-kube-api-access-vnw28\") pod \"redhat-marketplace-bs4tj\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.425002 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.471172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.471776 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:27.971762945 +0000 UTC m=+143.584580651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.536282 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qss9l"] Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.537253 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.553305 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qss9l"] Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.572226 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.572590 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.07257567 +0000 UTC m=+143.685393376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.572632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.572680 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-catalog-content\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.572714 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwh8\" (UniqueName: \"kubernetes.io/projected/e3733728-9f9d-45f8-af66-d0427bb7cfe1-kube-api-access-4pwh8\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.572746 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-utilities\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.573011 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.073004891 +0000 UTC m=+143.685822597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.673866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.674099 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.174072841 +0000 UTC m=+143.786890537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.674467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwh8\" (UniqueName: \"kubernetes.io/projected/e3733728-9f9d-45f8-af66-d0427bb7cfe1-kube-api-access-4pwh8\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.674508 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-utilities\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.674581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.674617 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-catalog-content\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.674981 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-catalog-content\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.675019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-utilities\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.675220 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.17520807 +0000 UTC m=+143.788025876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.717289 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwh8\" (UniqueName: \"kubernetes.io/projected/e3733728-9f9d-45f8-af66-d0427bb7cfe1-kube-api-access-4pwh8\") pod \"redhat-marketplace-qss9l\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.777682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.778281 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.278261929 +0000 UTC m=+143.891079645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.854146 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:27 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:27 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:27 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.854201 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.862332 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.886175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.886523 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.386511057 +0000 UTC m=+143.999328763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.987992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.988559 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.488540361 +0000 UTC m=+144.101358067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:27 crc kubenswrapper[4832]: I1204 06:11:27.988785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:27 crc kubenswrapper[4832]: E1204 06:11:27.989132 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.489113235 +0000 UTC m=+144.101930941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.047161 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bs4tj"] Dec 04 06:11:28 crc kubenswrapper[4832]: W1204 06:11:28.060462 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b864fd5_dc62_4f52_b7e7_1bdeeca4e88e.slice/crio-1783ccfb311ca9d4caa2a9b99d869056688f4227aceff0c703de5904092ee4fb WatchSource:0}: Error finding container 1783ccfb311ca9d4caa2a9b99d869056688f4227aceff0c703de5904092ee4fb: Status 404 returned error can't find the container with id 1783ccfb311ca9d4caa2a9b99d869056688f4227aceff0c703de5904092ee4fb Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.089588 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.089738 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.589713605 +0000 UTC m=+144.202531311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.089809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.090189 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.590171986 +0000 UTC m=+144.202989742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.158312 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j295k"] Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.161589 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.167677 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" event={"ID":"3d59fb7a-ef01-4919-8060-615a77afd343","Type":"ContainerStarted","Data":"8d656b4705da58c6b491ffaa7af3c8b02dc7a95f5eb02c6ea3970e313579b32c"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.167956 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.172743 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j295k"] Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.190909 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.191454 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 06:11:28.691431091 +0000 UTC m=+144.304248797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.197869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fjbm4" event={"ID":"2a87e0a5-dd3c-4eff-ac47-1de54c7f07ce","Type":"ContainerStarted","Data":"1c454f9ff2d0280f54322203bdd829f3e2a3115a1ad8bde0b610dde267c62194"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.208028 4832 generic.go:334] "Generic (PLEG): container finished" podID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerID="a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.208740 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mt7dw" event={"ID":"3eb4072b-8c81-4808-b3a6-9be9fc814060","Type":"ContainerDied","Data":"a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.208779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mt7dw" event={"ID":"3eb4072b-8c81-4808-b3a6-9be9fc814060","Type":"ContainerStarted","Data":"791c98be740fc475ba83b61122ccda8a63de3a0f38990091b64a717b3d4c260f"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.221004 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.239850 4832 generic.go:334] "Generic (PLEG): container finished" podID="21e75fec-8174-41c0-82b1-a01786d46246" containerID="db16ce4b0c173729faae11a72b9098f2eff6cd5961d699a82b0ca5d2b99e08d9" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.239984 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" event={"ID":"21e75fec-8174-41c0-82b1-a01786d46246","Type":"ContainerDied","Data":"db16ce4b0c173729faae11a72b9098f2eff6cd5961d699a82b0ca5d2b99e08d9"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.250759 4832 generic.go:334] "Generic (PLEG): container finished" podID="a07eda47-4b27-4396-90a1-a6a1569a6f99" containerID="689a64beebe9e0f0476d30d1e0e4af321ad25dfa88799507640e725534769a3e" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.250833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" event={"ID":"a07eda47-4b27-4396-90a1-a6a1569a6f99","Type":"ContainerDied","Data":"689a64beebe9e0f0476d30d1e0e4af321ad25dfa88799507640e725534769a3e"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.250884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" event={"ID":"a07eda47-4b27-4396-90a1-a6a1569a6f99","Type":"ContainerStarted","Data":"f6c496dd3ebe74634d605c872e5af17dd58c2a5c3d07dc6ed02fc8052c451faa"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.256569 4832 generic.go:334] "Generic (PLEG): 
container finished" podID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerID="1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.256625 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2d94" event={"ID":"adfeb9cb-8e12-4ba9-aafe-8753a774d720","Type":"ContainerDied","Data":"1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.256649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2d94" event={"ID":"adfeb9cb-8e12-4ba9-aafe-8753a774d720","Type":"ContainerStarted","Data":"bc44b36800ea4b6a398f8c8e8db7f7ac23f2a3b2c5dc89246c3809950f282e4f"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.265370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" event={"ID":"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4","Type":"ContainerStarted","Data":"967d371fcce4c0d216e9a9ce92b2b514312b6fe797c87c65f3b8232d64f7559f"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.295793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-utilities\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.295874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-catalog-content\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.295948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjwf\" (UniqueName: \"kubernetes.io/projected/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-kube-api-access-dqjwf\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.296028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.297132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" event={"ID":"95ed2e36-8e95-4012-be78-7f7e66d0349f","Type":"ContainerStarted","Data":"85f9adb6361aac8e35d0a858c2758a0ab5eb68d3f213b312135a5580c309c221"} Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.299590 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.799575697 +0000 UTC m=+144.412393403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.309354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bs4tj" event={"ID":"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e","Type":"ContainerStarted","Data":"1783ccfb311ca9d4caa2a9b99d869056688f4227aceff0c703de5904092ee4fb"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.343628 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc09cb39-1b31-47c6-88c7-8c15d31c4960" containerID="0ea88904d6df2f24d9fbb56f9e2eb00fdb9cb068d94cde01ae7017ecadf11549" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.343642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" event={"ID":"bc09cb39-1b31-47c6-88c7-8c15d31c4960","Type":"ContainerDied","Data":"0ea88904d6df2f24d9fbb56f9e2eb00fdb9cb068d94cde01ae7017ecadf11549"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.364750 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" podStartSLOduration=124.364733322 podStartE2EDuration="2m4.364733322s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:28.323682291 +0000 UTC m=+143.936500017" watchObservedRunningTime="2025-12-04 06:11:28.364733322 +0000 UTC m=+143.977551028" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.365418 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qss9l"] Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.365715 4832 generic.go:334] "Generic (PLEG): container finished" podID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerID="65e3a4c5c32690ccb40c5991ec7a78df39a53415da62e00c4b4cf870d767f66f" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.365973 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbw42" event={"ID":"29b70a28-7b0c-486b-a0f9-76e2e877cf26","Type":"ContainerDied","Data":"65e3a4c5c32690ccb40c5991ec7a78df39a53415da62e00c4b4cf870d767f66f"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.366018 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbw42" event={"ID":"29b70a28-7b0c-486b-a0f9-76e2e877cf26","Type":"ContainerStarted","Data":"976d89c39a16597bf7445af2877f3fa3a73994d4ee0f7831a5e7afc12e1fd340"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.382636 4832 generic.go:334] "Generic (PLEG): container finished" podID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerID="ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4" exitCode=0 Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.384517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzdgh" 
event={"ID":"f0f3ccce-259a-43f4-883e-a8f278c34053","Type":"ContainerDied","Data":"ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4"} Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.400775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzdgh" event={"ID":"f0f3ccce-259a-43f4-883e-a8f278c34053","Type":"ContainerStarted","Data":"77964a5a79e5cca74f6db526e54045737a0396fd26e5a1ab78a2091e1c3d0654"} Dec 04 06:11:28 crc kubenswrapper[4832]: W1204 06:11:28.391602 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3733728_9f9d_45f8_af66_d0427bb7cfe1.slice/crio-13a03cab2480d599603fe429af2240e0c41f941f4534195b18ed146a0f233e3d WatchSource:0}: Error finding container 13a03cab2480d599603fe429af2240e0c41f941f4534195b18ed146a0f233e3d: Status 404 returned error can't find the container with id 13a03cab2480d599603fe429af2240e0c41f941f4534195b18ed146a0f233e3d Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.396792 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.896773562 +0000 UTC m=+144.509591268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.396722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.401138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-utilities\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.401219 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-catalog-content\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.401277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjwf\" (UniqueName: \"kubernetes.io/projected/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-kube-api-access-dqjwf\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.401324 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.402368 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-utilities\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.402559 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:28.902544834 +0000 UTC m=+144.515362540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.403458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-catalog-content\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.406324 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pqqsl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.406354 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.413325 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.434970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.444475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjwf\" (UniqueName: \"kubernetes.io/projected/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-kube-api-access-dqjwf\") pod \"redhat-operators-j295k\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.499548 4832 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9nf4f" podStartSLOduration=124.499527894 podStartE2EDuration="2m4.499527894s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:28.469668068 +0000 UTC m=+144.082485774" watchObservedRunningTime="2025-12-04 06:11:28.499527894 +0000 UTC m=+144.112345600" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.503409 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.505269 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.005247956 +0000 UTC m=+144.618065662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.514114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.573934 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vlppg"] Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.575290 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vlppg"] Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.575410 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.607854 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.607915 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-utilities\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.607959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwcq\" (UniqueName: \"kubernetes.io/projected/1abbca7a-e500-4733-9160-2a34dcfa0531-kube-api-access-7bwcq\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.607984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-catalog-content\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.608427 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.108377087 +0000 UTC m=+144.721194853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.718962 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.719600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bwcq\" (UniqueName: \"kubernetes.io/projected/1abbca7a-e500-4733-9160-2a34dcfa0531-kube-api-access-7bwcq\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.719652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-catalog-content\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.719783 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-utilities\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.720278 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-utilities\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.720366 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.220347086 +0000 UTC m=+144.833164802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.720943 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-catalog-content\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.782020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwcq\" (UniqueName: \"kubernetes.io/projected/1abbca7a-e500-4733-9160-2a34dcfa0531-kube-api-access-7bwcq\") pod \"redhat-operators-vlppg\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") " pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.824962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.825862 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.325819496 +0000 UTC m=+144.938637202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.855263 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:28 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:28 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:28 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.855372 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:28 crc kubenswrapper[4832]: I1204 06:11:28.925764 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:28 crc kubenswrapper[4832]: E1204 06:11:28.926270 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.426255661 +0000 UTC m=+145.039073367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.013707 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.037314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.037745 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.537724888 +0000 UTC m=+145.150542674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.119907 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j295k"] Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.141111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.141563 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.641547587 +0000 UTC m=+145.254365293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.215747 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.216405 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.227081 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.227450 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.229782 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.243822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb8cf355-6567-45c5-a82f-29ef50c27d15-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.243856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb8cf355-6567-45c5-a82f-29ef50c27d15-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.243930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.244204 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.744191906 +0000 UTC m=+145.357009612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.350344 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.350854 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.850827664 +0000 UTC m=+145.463645370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.351122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.351221 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb8cf355-6567-45c5-a82f-29ef50c27d15-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.351242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb8cf355-6567-45c5-a82f-29ef50c27d15-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.351363 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb8cf355-6567-45c5-a82f-29ef50c27d15-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.352860 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.852843874 +0000 UTC m=+145.465661570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.380711 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb8cf355-6567-45c5-a82f-29ef50c27d15-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.455400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.455675 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:29.955661308 +0000 UTC m=+145.568479014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.474804 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerStarted","Data":"1afdd0de8e64a7a080681049c21ee7f5271c3aa947d717f14ddff42de60762f7"} Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.498699 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.499342 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.503044 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.503332 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.520711 4832 generic.go:334] "Generic (PLEG): container finished" podID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerID="ddf01246bd73bb3fe44eb075f21a082205589ce1d299d2851f8afe5e59fde528" exitCode=0 Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.520819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerDied","Data":"ddf01246bd73bb3fe44eb075f21a082205589ce1d299d2851f8afe5e59fde528"} Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.520851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerStarted","Data":"13a03cab2480d599603fe429af2240e0c41f941f4534195b18ed146a0f233e3d"} Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.524864 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.550106 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerID="1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff" exitCode=0 Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.550436 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bs4tj" event={"ID":"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e","Type":"ContainerDied","Data":"1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff"} Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.573565 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013b0d7b-05df-413f-8665-73c5f0cfcc42-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.573614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013b0d7b-05df-413f-8665-73c5f0cfcc42-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.573720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.574013 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.074002635 +0000 UTC m=+145.686820341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.589755 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.610184 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" event={"ID":"21e75fec-8174-41c0-82b1-a01786d46246","Type":"ContainerStarted","Data":"1308862b2d27318e9da2884d326e13a393aa56e5ec9503dae1b07ff8d8a55f1e"} Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.675970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.676246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013b0d7b-05df-413f-8665-73c5f0cfcc42-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.676271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013b0d7b-05df-413f-8665-73c5f0cfcc42-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.676414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013b0d7b-05df-413f-8665-73c5f0cfcc42-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.676655 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.176632814 +0000 UTC m=+145.789450520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.690092 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vlppg"] Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.705828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013b0d7b-05df-413f-8665-73c5f0cfcc42-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.728788 4832 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 04 06:11:29 crc kubenswrapper[4832]: W1204 06:11:29.737169 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abbca7a_e500_4733_9160_2a34dcfa0531.slice/crio-e18e69d262e4505a81ad9cf49cf85460a802aac7a9be32834405cff70c2f1f1d WatchSource:0}: Error finding container e18e69d262e4505a81ad9cf49cf85460a802aac7a9be32834405cff70c2f1f1d: Status 404 returned error can't find the container with id e18e69d262e4505a81ad9cf49cf85460a802aac7a9be32834405cff70c2f1f1d Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.803039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:29 crc kubenswrapper[4832]: E1204 06:11:29.804995 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 06:11:30.304979287 +0000 UTC m=+145.917796983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9chqb" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.844173 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:29 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:29 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:29 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.844767 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.894491 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.894550 4832 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-04T06:11:29.728811239Z","Handler":null,"Name":""} Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.903129 4832 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.903165 4832 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.904042 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 06:11:29 crc kubenswrapper[4832]: I1204 06:11:29.955093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.006543 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.014883 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.014919 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.063528 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9chqb\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") " pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.135815 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.183649 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.213272 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.320517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssdft\" (UniqueName: \"kubernetes.io/projected/bc09cb39-1b31-47c6-88c7-8c15d31c4960-kube-api-access-ssdft\") pod \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.320696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc09cb39-1b31-47c6-88c7-8c15d31c4960-secret-volume\") pod \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.320747 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc09cb39-1b31-47c6-88c7-8c15d31c4960-config-volume\") pod \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\" (UID: \"bc09cb39-1b31-47c6-88c7-8c15d31c4960\") " Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.321654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc09cb39-1b31-47c6-88c7-8c15d31c4960-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc09cb39-1b31-47c6-88c7-8c15d31c4960" (UID: "bc09cb39-1b31-47c6-88c7-8c15d31c4960"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.340786 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc09cb39-1b31-47c6-88c7-8c15d31c4960-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc09cb39-1b31-47c6-88c7-8c15d31c4960" (UID: "bc09cb39-1b31-47c6-88c7-8c15d31c4960"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.346034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc09cb39-1b31-47c6-88c7-8c15d31c4960-kube-api-access-ssdft" (OuterVolumeSpecName: "kube-api-access-ssdft") pod "bc09cb39-1b31-47c6-88c7-8c15d31c4960" (UID: "bc09cb39-1b31-47c6-88c7-8c15d31c4960"). InnerVolumeSpecName "kube-api-access-ssdft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.422177 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssdft\" (UniqueName: \"kubernetes.io/projected/bc09cb39-1b31-47c6-88c7-8c15d31c4960-kube-api-access-ssdft\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.422211 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc09cb39-1b31-47c6-88c7-8c15d31c4960-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.422222 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc09cb39-1b31-47c6-88c7-8c15d31c4960-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.526052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.537991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.595585 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 06:11:30 crc kubenswrapper[4832]: W1204 06:11:30.609661 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod013b0d7b_05df_413f_8665_73c5f0cfcc42.slice/crio-7881cda049176fcbd33a64d8cb3a3529665704c2405212dc01e83f63b0f8019d WatchSource:0}: Error finding container 7881cda049176fcbd33a64d8cb3a3529665704c2405212dc01e83f63b0f8019d: Status 404 returned error can't find the container with id 7881cda049176fcbd33a64d8cb3a3529665704c2405212dc01e83f63b0f8019d Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.616586 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb8cf355-6567-45c5-a82f-29ef50c27d15","Type":"ContainerStarted","Data":"4d81ec0a6efd518b1d5743ceb6ac7ca4c06da129cd522653380031a048e1d58e"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.623037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" event={"ID":"bc09cb39-1b31-47c6-88c7-8c15d31c4960","Type":"ContainerDied","Data":"5b191cecca57e14ae43cac58d5d685a0e79df7d6829c4eecc59dcabae0f7ad02"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.623108 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b191cecca57e14ae43cac58d5d685a0e79df7d6829c4eecc59dcabae0f7ad02" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.623206 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.627313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.628228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.633766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"013b0d7b-05df-413f-8665-73c5f0cfcc42","Type":"ContainerStarted","Data":"7881cda049176fcbd33a64d8cb3a3529665704c2405212dc01e83f63b0f8019d"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.642853 4832 generic.go:334] "Generic (PLEG): container finished" podID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerID="7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322" exitCode=0 Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.643125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerDied","Data":"7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.643150 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerStarted","Data":"e18e69d262e4505a81ad9cf49cf85460a802aac7a9be32834405cff70c2f1f1d"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.666943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" event={"ID":"21e75fec-8174-41c0-82b1-a01786d46246","Type":"ContainerStarted","Data":"91a2d1b70eda371b2e7a5606fc06a6a8cde074bf580e824741bc5f71d732b9ca"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.677475 4832 generic.go:334] "Generic (PLEG): container finished" podID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerID="c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e" exitCode=0 Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.677581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerDied","Data":"c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.695107 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" event={"ID":"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4","Type":"ContainerStarted","Data":"f8e949f43b1e283cdefba5d0055e5ec8fd35ac14c9f7b6eccd49039cf32848f2"} Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.719523 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" podStartSLOduration=127.719498095 podStartE2EDuration="2m7.719498095s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:30.688672486 +0000 UTC m=+146.301490192" watchObservedRunningTime="2025-12-04 06:11:30.719498095 +0000 UTC m=+146.332315801" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.730716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.730980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.737237 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.738703 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.776212 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.832780 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.843209 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:30 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:30 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:30 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.843296 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.848475 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.857379 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9chqb"] Dec 04 06:11:30 crc kubenswrapper[4832]: I1204 06:11:30.858903 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 06:11:31 crc kubenswrapper[4832]: W1204 06:11:31.009261 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f05718_aaf5_41f3_94b2_026b8eb39474.slice/crio-1fe678d65024cd4f18f476e40d7cf4127e1f1bff9c5903deecf2e20ddfab855d WatchSource:0}: Error finding container 1fe678d65024cd4f18f476e40d7cf4127e1f1bff9c5903deecf2e20ddfab855d: Status 404 returned error can't find the container with id 1fe678d65024cd4f18f476e40d7cf4127e1f1bff9c5903deecf2e20ddfab855d Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.417051 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tw7nf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.417099 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tw7nf" podUID="b9cd00db-0b78-4c09-8063-2c2bd201fe57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.417241 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tw7nf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.417311 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tw7nf" podUID="b9cd00db-0b78-4c09-8063-2c2bd201fe57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 04 06:11:31 crc kubenswrapper[4832]: W1204 06:11:31.417421 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-6cba01e987e1baae4902140b5b3f6103c105d12103836974dc30abee309af6f7 WatchSource:0}: Error finding container 6cba01e987e1baae4902140b5b3f6103c105d12103836974dc30abee309af6f7: Status 404 returned error can't find the container with id 6cba01e987e1baae4902140b5b3f6103c105d12103836974dc30abee309af6f7 Dec 04 06:11:31 crc kubenswrapper[4832]: W1204 06:11:31.501591 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-cb9b1e48002608a255dc1cf485d8c32a69eba569721fe14f7e644291191a6e29 WatchSource:0}: Error finding container cb9b1e48002608a255dc1cf485d8c32a69eba569721fe14f7e644291191a6e29: Status 404 returned error can't find the container with id cb9b1e48002608a255dc1cf485d8c32a69eba569721fe14f7e644291191a6e29 Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 
06:11:31.613161 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.721226 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.721272 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.737527 4832 patch_prober.go:28] interesting pod/console-f9d7485db-g2thm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.737606 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2thm" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.761929 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cb9b1e48002608a255dc1cf485d8c32a69eba569721fe14f7e644291191a6e29"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.767344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" event={"ID":"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4","Type":"ContainerStarted","Data":"17e47599866d4d78e646e3246acca307c0241691fd44f09ab66ae34af88563e5"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.767429 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" event={"ID":"020db14a-b4ac-432d-8c8a-bd3ae7cac2b4","Type":"ContainerStarted","Data":"9de5cfe5614488a49e0d767072c508585c2852ce667bd867febbd3c072059970"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.844364 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:31 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:31 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:31 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.844518 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.851826 4832 generic.go:334] "Generic (PLEG): container finished" podID="cb8cf355-6567-45c5-a82f-29ef50c27d15" containerID="50856e7486e3a1db61bf975b823fc855b5684f5b39b6142ff7c64b9903146af0" exitCode=0 Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.851986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"cb8cf355-6567-45c5-a82f-29ef50c27d15","Type":"ContainerDied","Data":"50856e7486e3a1db61bf975b823fc855b5684f5b39b6142ff7c64b9903146af0"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.890525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" event={"ID":"d9f05718-aaf5-41f3-94b2-026b8eb39474","Type":"ContainerStarted","Data":"0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.890952 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" event={"ID":"d9f05718-aaf5-41f3-94b2-026b8eb39474","Type":"ContainerStarted","Data":"1fe678d65024cd4f18f476e40d7cf4127e1f1bff9c5903deecf2e20ddfab855d"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.892102 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.893896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6cba01e987e1baae4902140b5b3f6103c105d12103836974dc30abee309af6f7"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.896877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"013b0d7b-05df-413f-8665-73c5f0cfcc42","Type":"ContainerStarted","Data":"266483f872a612ac63f285bef31f3dcb53a88ac53fad9c394aaef8f651fb2ae6"} Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.924147 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" podStartSLOduration=127.924130243 podStartE2EDuration="2m7.924130243s" podCreationTimestamp="2025-12-04 06:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:31.913674466 +0000 UTC m=+147.526492172" watchObservedRunningTime="2025-12-04 06:11:31.924130243 +0000 UTC m=+147.536947949" Dec 04 06:11:31 crc kubenswrapper[4832]: I1204 06:11:31.932820 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.932775067 podStartE2EDuration="2.932775067s" podCreationTimestamp="2025-12-04 06:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:31.928295136 +0000 UTC m=+147.541112842" watchObservedRunningTime="2025-12-04 06:11:31.932775067 +0000 UTC m=+147.545592773" Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.839981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.843001 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:32 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:32 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:32 crc kubenswrapper[4832]: healthz 
check failed Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.843052 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.911290 4832 generic.go:334] "Generic (PLEG): container finished" podID="013b0d7b-05df-413f-8665-73c5f0cfcc42" containerID="266483f872a612ac63f285bef31f3dcb53a88ac53fad9c394aaef8f651fb2ae6" exitCode=0 Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.911445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"013b0d7b-05df-413f-8665-73c5f0cfcc42","Type":"ContainerDied","Data":"266483f872a612ac63f285bef31f3dcb53a88ac53fad9c394aaef8f651fb2ae6"} Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.914933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a2ec83254b7d7daa7741a5626168547d5fcaa9c4661a5df1e29b47b754a16c3d"} Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.915501 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.919782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"950896882a9d5f0b8c0619bc137fe3dd695b33cd33af17b0b1d9dab226486334"} Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.935749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e1a3e93d35d53b53fc5b289122bfd5e885ab0e5a9f33cf680427b804f44e4c56"} Dec 04 06:11:32 crc kubenswrapper[4832]: I1204 06:11:32.935815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7f136503bbbb4500f8b7e00d3f636afc1a8bb5ab3035e01633b2d0ad15143381"} Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.007039 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fw5rj" podStartSLOduration=14.007021971 podStartE2EDuration="14.007021971s" podCreationTimestamp="2025-12-04 06:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:11:33.004699584 +0000 UTC m=+148.617517290" watchObservedRunningTime="2025-12-04 06:11:33.007021971 +0000 UTC m=+148.619839677" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.107985 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.108030 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.117090 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.146013 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.146082 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.148841 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.164160 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.389599 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.493556 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb8cf355-6567-45c5-a82f-29ef50c27d15-kube-api-access\") pod \"cb8cf355-6567-45c5-a82f-29ef50c27d15\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.493699 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb8cf355-6567-45c5-a82f-29ef50c27d15-kubelet-dir\") pod \"cb8cf355-6567-45c5-a82f-29ef50c27d15\" (UID: \"cb8cf355-6567-45c5-a82f-29ef50c27d15\") " Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.493844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb8cf355-6567-45c5-a82f-29ef50c27d15-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb8cf355-6567-45c5-a82f-29ef50c27d15" (UID: "cb8cf355-6567-45c5-a82f-29ef50c27d15"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.494153 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb8cf355-6567-45c5-a82f-29ef50c27d15-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.500218 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8cf355-6567-45c5-a82f-29ef50c27d15-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb8cf355-6567-45c5-a82f-29ef50c27d15" (UID: "cb8cf355-6567-45c5-a82f-29ef50c27d15"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.595979 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb8cf355-6567-45c5-a82f-29ef50c27d15-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.841408 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:33 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:33 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:33 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.841471 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.973232 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.973311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb8cf355-6567-45c5-a82f-29ef50c27d15","Type":"ContainerDied","Data":"4d81ec0a6efd518b1d5743ceb6ac7ca4c06da129cd522653380031a048e1d58e"} Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.973367 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d81ec0a6efd518b1d5743ceb6ac7ca4c06da129cd522653380031a048e1d58e" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.977763 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9s7hj" Dec 04 06:11:33 crc kubenswrapper[4832]: I1204 06:11:33.978197 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wtnbm" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.376158 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.409938 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013b0d7b-05df-413f-8665-73c5f0cfcc42-kube-api-access\") pod \"013b0d7b-05df-413f-8665-73c5f0cfcc42\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.409989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013b0d7b-05df-413f-8665-73c5f0cfcc42-kubelet-dir\") pod \"013b0d7b-05df-413f-8665-73c5f0cfcc42\" (UID: \"013b0d7b-05df-413f-8665-73c5f0cfcc42\") " Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.410494 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/013b0d7b-05df-413f-8665-73c5f0cfcc42-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "013b0d7b-05df-413f-8665-73c5f0cfcc42" (UID: "013b0d7b-05df-413f-8665-73c5f0cfcc42"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.426790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013b0d7b-05df-413f-8665-73c5f0cfcc42-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "013b0d7b-05df-413f-8665-73c5f0cfcc42" (UID: "013b0d7b-05df-413f-8665-73c5f0cfcc42"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.512049 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/013b0d7b-05df-413f-8665-73c5f0cfcc42-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.512083 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/013b0d7b-05df-413f-8665-73c5f0cfcc42-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.840619 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:34 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:34 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:34 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.840675 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.890520 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-t8dcw" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.981880 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"013b0d7b-05df-413f-8665-73c5f0cfcc42","Type":"ContainerDied","Data":"7881cda049176fcbd33a64d8cb3a3529665704c2405212dc01e83f63b0f8019d"} Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.981922 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7881cda049176fcbd33a64d8cb3a3529665704c2405212dc01e83f63b0f8019d" Dec 04 06:11:34 crc kubenswrapper[4832]: I1204 06:11:34.982083 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 06:11:35 crc kubenswrapper[4832]: I1204 06:11:35.363458 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:11:35 crc kubenswrapper[4832]: I1204 06:11:35.364087 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:11:35 crc kubenswrapper[4832]: I1204 06:11:35.840451 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:35 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:35 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:35 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:35 crc kubenswrapper[4832]: I1204 06:11:35.840523 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:36 crc kubenswrapper[4832]: I1204 06:11:36.848942 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:36 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:36 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:36 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:36 crc kubenswrapper[4832]: I1204 06:11:36.848997 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:37 crc kubenswrapper[4832]: I1204 06:11:37.840581 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:37 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:37 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:37 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:37 crc kubenswrapper[4832]: I1204 06:11:37.840894 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:38 crc kubenswrapper[4832]: I1204 06:11:38.840808 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:38 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:38 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:38 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:38 crc kubenswrapper[4832]: I1204 06:11:38.841282 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:39 crc kubenswrapper[4832]: I1204 06:11:39.840711 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:39 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:39 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:39 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:39 crc kubenswrapper[4832]: I1204 06:11:39.840763 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:40 crc kubenswrapper[4832]: I1204 06:11:40.846054 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:40 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:40 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:40 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:40 crc kubenswrapper[4832]: I1204 06:11:40.846119 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:41 crc kubenswrapper[4832]: I1204 06:11:41.414377 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tw7nf" Dec 04 06:11:41 crc kubenswrapper[4832]: I1204 06:11:41.722095 4832 patch_prober.go:28] interesting pod/console-f9d7485db-g2thm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 04 06:11:41 crc kubenswrapper[4832]: I1204 06:11:41.722167 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g2thm" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 04 06:11:41 crc kubenswrapper[4832]: I1204 06:11:41.839987 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:41 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:41 crc 
kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:41 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:41 crc kubenswrapper[4832]: I1204 06:11:41.840038 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:42 crc kubenswrapper[4832]: I1204 06:11:42.840860 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:42 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:42 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:42 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:42 crc kubenswrapper[4832]: I1204 06:11:42.840945 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:43 crc kubenswrapper[4832]: I1204 06:11:43.841295 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:43 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:43 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:43 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:43 crc kubenswrapper[4832]: I1204 06:11:43.841372 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:44 crc kubenswrapper[4832]: I1204 06:11:44.840777 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:44 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:44 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:44 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:44 crc kubenswrapper[4832]: I1204 06:11:44.840836 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:45 crc kubenswrapper[4832]: I1204 06:11:45.803906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod \"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:45 crc kubenswrapper[4832]: I1204 06:11:45.810456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37ab4745-26f8-4cb8-a4c4-c3064251922e-metrics-certs\") pod 
\"network-metrics-daemon-ctzsn\" (UID: \"37ab4745-26f8-4cb8-a4c4-c3064251922e\") " pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:45 crc kubenswrapper[4832]: I1204 06:11:45.840764 4832 patch_prober.go:28] interesting pod/router-default-5444994796-jdgxv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 06:11:45 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Dec 04 06:11:45 crc kubenswrapper[4832]: [+]process-running ok Dec 04 06:11:45 crc kubenswrapper[4832]: healthz check failed Dec 04 06:11:45 crc kubenswrapper[4832]: I1204 06:11:45.840834 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jdgxv" podUID="3bdc749a-22f3-4cb8-b987-04f7bc297cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 06:11:45 crc kubenswrapper[4832]: I1204 06:11:45.840909 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ctzsn" Dec 04 06:11:46 crc kubenswrapper[4832]: I1204 06:11:46.228639 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ctzsn"] Dec 04 06:11:46 crc kubenswrapper[4832]: I1204 06:11:46.844010 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:46 crc kubenswrapper[4832]: I1204 06:11:46.846264 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jdgxv" Dec 04 06:11:50 crc kubenswrapper[4832]: I1204 06:11:50.142057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" Dec 04 06:11:51 crc kubenswrapper[4832]: W1204 06:11:51.070250 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37ab4745_26f8_4cb8_a4c4_c3064251922e.slice/crio-0e565585746274a29a0137c0d0b5ac093bb678fc3c37c30b2bf8ab539f38077e WatchSource:0}: Error finding container 0e565585746274a29a0137c0d0b5ac093bb678fc3c37c30b2bf8ab539f38077e: Status 404 returned error can't find the container with id 0e565585746274a29a0137c0d0b5ac093bb678fc3c37c30b2bf8ab539f38077e Dec 04 06:11:51 crc kubenswrapper[4832]: I1204 06:11:51.094030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" event={"ID":"37ab4745-26f8-4cb8-a4c4-c3064251922e","Type":"ContainerStarted","Data":"0e565585746274a29a0137c0d0b5ac093bb678fc3c37c30b2bf8ab539f38077e"} Dec 04 06:11:51 crc kubenswrapper[4832]: I1204 06:11:51.727010 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:51 crc kubenswrapper[4832]: I1204 06:11:51.731359 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:11:57 crc kubenswrapper[4832]: E1204 06:11:57.678776 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 04 06:11:57 crc kubenswrapper[4832]: E1204 06:11:57.679497 4832 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnw28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bs4tj_openshift-marketplace(8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 06:11:57 crc kubenswrapper[4832]: E1204 06:11:57.680691 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bs4tj" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.181278 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bs4tj" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.283183 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.283851 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pwh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qss9l_openshift-marketplace(e3733728-9f9d-45f8-af66-d0427bb7cfe1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.285707 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qss9l" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.297832 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.297963 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkrwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nbw42_openshift-marketplace(29b70a28-7b0c-486b-a0f9-76e2e877cf26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.299335 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nbw42" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.327284 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.327437 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bwcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vlppg_openshift-marketplace(1abbca7a-e500-4733-9160-2a34dcfa0531): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.328596 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vlppg" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.343450 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.343649 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqjwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j295k_openshift-marketplace(11f4fe16-d42c-4aaf-9b33-4ab8f93e2930): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 06:12:00 crc kubenswrapper[4832]: E1204 06:12:00.344925 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j295k" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.148677 4832 generic.go:334] "Generic (PLEG): container finished" podID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerID="9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0" exitCode=0 Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.148819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzdgh" event={"ID":"f0f3ccce-259a-43f4-883e-a8f278c34053","Type":"ContainerDied","Data":"9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0"} Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.152294 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" event={"ID":"37ab4745-26f8-4cb8-a4c4-c3064251922e","Type":"ContainerStarted","Data":"18dfdba67c1a72e08a15b9edc3072ba3a20bc2798590c030cb8cd692a622398a"} Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.152329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ctzsn" event={"ID":"37ab4745-26f8-4cb8-a4c4-c3064251922e","Type":"ContainerStarted","Data":"fefbcb197f90c0eb0fced6f9d5dc899f358fcfd94092be8962360899ee34a4a8"} Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.154215 4832 generic.go:334] "Generic (PLEG): container finished" podID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerID="ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2" exitCode=0 Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.154275 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mt7dw" event={"ID":"3eb4072b-8c81-4808-b3a6-9be9fc814060","Type":"ContainerDied","Data":"ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2"} Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.155991 4832 generic.go:334] "Generic (PLEG): container finished" podID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerID="9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65" exitCode=0 Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.156075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2d94" event={"ID":"adfeb9cb-8e12-4ba9-aafe-8753a774d720","Type":"ContainerDied","Data":"9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65"} Dec 04 06:12:01 crc kubenswrapper[4832]: E1204 06:12:01.157505 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qss9l" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" Dec 04 06:12:01 crc kubenswrapper[4832]: E1204 06:12:01.157517 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vlppg" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" Dec 04 06:12:01 crc kubenswrapper[4832]: E1204 06:12:01.157617 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j295k" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" Dec 04 06:12:01 crc kubenswrapper[4832]: E1204 06:12:01.158961 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nbw42" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" Dec 04 06:12:01 crc kubenswrapper[4832]: I1204 06:12:01.246348 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ctzsn" podStartSLOduration=158.246330078 podStartE2EDuration="2m38.246330078s" podCreationTimestamp="2025-12-04 06:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:12:01.245726513 +0000 UTC m=+176.858544219" watchObservedRunningTime="2025-12-04 06:12:01.246330078 +0000 UTC m=+176.859147784" Dec 04 06:12:02 crc kubenswrapper[4832]: I1204 06:12:02.164108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzdgh" event={"ID":"f0f3ccce-259a-43f4-883e-a8f278c34053","Type":"ContainerStarted","Data":"99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea"} Dec 04 06:12:02 crc kubenswrapper[4832]: I1204 06:12:02.166427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mt7dw" 
event={"ID":"3eb4072b-8c81-4808-b3a6-9be9fc814060","Type":"ContainerStarted","Data":"662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca"} Dec 04 06:12:02 crc kubenswrapper[4832]: I1204 06:12:02.170200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2d94" event={"ID":"adfeb9cb-8e12-4ba9-aafe-8753a774d720","Type":"ContainerStarted","Data":"d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f"} Dec 04 06:12:02 crc kubenswrapper[4832]: I1204 06:12:02.185186 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qzdgh" podStartSLOduration=5.006429757 podStartE2EDuration="38.185167256s" podCreationTimestamp="2025-12-04 06:11:24 +0000 UTC" firstStartedPulling="2025-12-04 06:11:28.386180371 +0000 UTC m=+143.998998077" lastFinishedPulling="2025-12-04 06:12:01.56491787 +0000 UTC m=+177.177735576" observedRunningTime="2025-12-04 06:12:02.18288258 +0000 UTC m=+177.795700296" watchObservedRunningTime="2025-12-04 06:12:02.185167256 +0000 UTC m=+177.797984972" Dec 04 06:12:02 crc kubenswrapper[4832]: I1204 06:12:02.208518 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v2d94" podStartSLOduration=3.922849058 podStartE2EDuration="37.208500321s" podCreationTimestamp="2025-12-04 06:11:25 +0000 UTC" firstStartedPulling="2025-12-04 06:11:28.260619547 +0000 UTC m=+143.873437253" lastFinishedPulling="2025-12-04 06:12:01.54627081 +0000 UTC m=+177.159088516" observedRunningTime="2025-12-04 06:12:02.205962698 +0000 UTC m=+177.818780414" watchObservedRunningTime="2025-12-04 06:12:02.208500321 +0000 UTC m=+177.821318027" Dec 04 06:12:02 crc kubenswrapper[4832]: I1204 06:12:02.226315 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mt7dw" podStartSLOduration=3.84505621 podStartE2EDuration="37.22629759s" podCreationTimestamp="2025-12-04 06:11:25 +0000 UTC" firstStartedPulling="2025-12-04 06:11:28.220695212 +0000 UTC m=+143.833512918" lastFinishedPulling="2025-12-04 06:12:01.601936592 +0000 UTC m=+177.214754298" observedRunningTime="2025-12-04 06:12:02.221966923 +0000 UTC m=+177.834784639" watchObservedRunningTime="2025-12-04 06:12:02.22629759 +0000 UTC m=+177.839115296" Dec 04 06:12:03 crc kubenswrapper[4832]: I1204 06:12:03.138750 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nqbbx" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.272945 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.273430 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.345848 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.362158 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:12:05 crc kubenswrapper[4832]: 
I1204 06:12:05.362220 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.624374 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mt7dw" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.624795 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mt7dw" Dec 04 06:12:05 crc kubenswrapper[4832]: I1204 06:12:05.667430 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mt7dw" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.023803 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v2d94" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.023861 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v2d94" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.070704 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v2d94" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.227230 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v2d94" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.234461 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.239337 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mt7dw" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804442 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 06:12:06 crc kubenswrapper[4832]: E1204 06:12:06.804683 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc09cb39-1b31-47c6-88c7-8c15d31c4960" containerName="collect-profiles" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804695 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc09cb39-1b31-47c6-88c7-8c15d31c4960" containerName="collect-profiles" Dec 04 06:12:06 crc kubenswrapper[4832]: E1204 06:12:06.804717 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8cf355-6567-45c5-a82f-29ef50c27d15" containerName="pruner" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804723 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8cf355-6567-45c5-a82f-29ef50c27d15" containerName="pruner" Dec 04 06:12:06 crc kubenswrapper[4832]: E1204 06:12:06.804734 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013b0d7b-05df-413f-8665-73c5f0cfcc42" containerName="pruner" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804740 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="013b0d7b-05df-413f-8665-73c5f0cfcc42" containerName="pruner" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804829 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cb8cf355-6567-45c5-a82f-29ef50c27d15" containerName="pruner" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804848 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc09cb39-1b31-47c6-88c7-8c15d31c4960" containerName="collect-profiles" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.804857 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="013b0d7b-05df-413f-8665-73c5f0cfcc42" containerName="pruner" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.805266 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.807238 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.807523 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.808004 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.881143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.881195 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.982572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.982632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:06 crc kubenswrapper[4832]: I1204 06:12:06.982739 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:07 crc kubenswrapper[4832]: I1204 06:12:07.106760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 
06:12:07 crc kubenswrapper[4832]: I1204 06:12:07.406659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:07 crc kubenswrapper[4832]: I1204 06:12:07.581986 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v2d94"] Dec 04 06:12:07 crc kubenswrapper[4832]: I1204 06:12:07.777915 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.207926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7af40de6-5f8c-4a76-b683-d8f412d5dae5","Type":"ContainerStarted","Data":"2850e86d09135b51aa2ba0f893da2f9c513b33be8ec9b8dc03c83aef10eddc78"} Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.208249 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7af40de6-5f8c-4a76-b683-d8f412d5dae5","Type":"ContainerStarted","Data":"894f71de3d425410e12668f1c8ccd05fdf899fbababd2c47ed4e5b67ab219eef"} Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.208093 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v2d94" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="registry-server" containerID="cri-o://d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f" gracePeriod=2 Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.229452 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.229431336 podStartE2EDuration="2.229431336s" podCreationTimestamp="2025-12-04 06:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:12:08.228280378 +0000 UTC m=+183.841098084" watchObservedRunningTime="2025-12-04 06:12:08.229431336 +0000 UTC m=+183.842249042" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.626003 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v2d94" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.706015 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-catalog-content\") pod \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.706145 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtl8g\" (UniqueName: \"kubernetes.io/projected/adfeb9cb-8e12-4ba9-aafe-8753a774d720-kube-api-access-qtl8g\") pod \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.706198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-utilities\") pod \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\" (UID: \"adfeb9cb-8e12-4ba9-aafe-8753a774d720\") " Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.707034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-utilities" (OuterVolumeSpecName: "utilities") pod "adfeb9cb-8e12-4ba9-aafe-8753a774d720" (UID: "adfeb9cb-8e12-4ba9-aafe-8753a774d720"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.712816 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfeb9cb-8e12-4ba9-aafe-8753a774d720-kube-api-access-qtl8g" (OuterVolumeSpecName: "kube-api-access-qtl8g") pod "adfeb9cb-8e12-4ba9-aafe-8753a774d720" (UID: "adfeb9cb-8e12-4ba9-aafe-8753a774d720"). InnerVolumeSpecName "kube-api-access-qtl8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.757459 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adfeb9cb-8e12-4ba9-aafe-8753a774d720" (UID: "adfeb9cb-8e12-4ba9-aafe-8753a774d720"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.807327 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.807362 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adfeb9cb-8e12-4ba9-aafe-8753a774d720-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:08 crc kubenswrapper[4832]: I1204 06:12:08.807373 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtl8g\" (UniqueName: \"kubernetes.io/projected/adfeb9cb-8e12-4ba9-aafe-8753a774d720-kube-api-access-qtl8g\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.216824 4832 generic.go:334] "Generic (PLEG): container finished" podID="7af40de6-5f8c-4a76-b683-d8f412d5dae5" containerID="2850e86d09135b51aa2ba0f893da2f9c513b33be8ec9b8dc03c83aef10eddc78" exitCode=0 Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.216922 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7af40de6-5f8c-4a76-b683-d8f412d5dae5","Type":"ContainerDied","Data":"2850e86d09135b51aa2ba0f893da2f9c513b33be8ec9b8dc03c83aef10eddc78"} Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.222788 4832 generic.go:334] "Generic (PLEG): container finished" podID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerID="d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f" exitCode=0 Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.222875 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v2d94" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.222867 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2d94" event={"ID":"adfeb9cb-8e12-4ba9-aafe-8753a774d720","Type":"ContainerDied","Data":"d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f"} Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.222949 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2d94" event={"ID":"adfeb9cb-8e12-4ba9-aafe-8753a774d720","Type":"ContainerDied","Data":"bc44b36800ea4b6a398f8c8e8db7f7ac23f2a3b2c5dc89246c3809950f282e4f"} Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.222972 4832 scope.go:117] "RemoveContainer" containerID="d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.240089 4832 scope.go:117] "RemoveContainer" containerID="9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.256719 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v2d94"] Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.261015 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v2d94"] Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.276779 4832 scope.go:117] "RemoveContainer" containerID="1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.289805 4832 scope.go:117] "RemoveContainer" containerID="d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f" Dec 04 06:12:09 crc kubenswrapper[4832]: E1204 06:12:09.290179 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f\": container with ID starting with d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f not found: ID does not exist" containerID="d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.290228 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f"} err="failed to get container status \"d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f\": rpc error: code = NotFound desc = could not find container \"d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f\": container with ID starting with d9cdca8eeadd732624e437f4d0054f9fce2ada9eff463aa0173b6c5b54cb339f not found: ID does not exist" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.290280 4832 scope.go:117] "RemoveContainer" containerID="9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65" Dec 04 06:12:09 crc kubenswrapper[4832]: E1204 06:12:09.290533 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65\": container with ID starting with 9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65 not found: ID does not exist" containerID="9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.290555 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65"} err="failed to get container status \"9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65\": rpc error: code = NotFound desc = could not find container \"9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65\": container with ID starting with 9ebb85bf47f594c68efad7b6e4dad2189d5a8692eaf58336d248300b80d53e65 not found: ID does not exist" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.290568 4832 scope.go:117] "RemoveContainer" containerID="1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a" Dec 04 06:12:09 crc kubenswrapper[4832]: E1204 06:12:09.290822 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a\": container with ID starting with 1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a not found: ID does not exist" containerID="1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a" Dec 04 06:12:09 crc kubenswrapper[4832]: I1204 06:12:09.290850 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a"} err="failed to get container status \"1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a\": rpc error: code = NotFound desc = could not find container \"1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a\": container with ID starting with 1bed0c57c85c91c405a1e46509cea3e3e604ff136a9eeab74eaa2379c2306e1a not found: ID does not exist" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.118027 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vcj7x"] Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.533214 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.626059 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kube-api-access\") pod \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.626166 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kubelet-dir\") pod \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\" (UID: \"7af40de6-5f8c-4a76-b683-d8f412d5dae5\") " Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.626300 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7af40de6-5f8c-4a76-b683-d8f412d5dae5" (UID: "7af40de6-5f8c-4a76-b683-d8f412d5dae5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.626837 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.630335 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7af40de6-5f8c-4a76-b683-d8f412d5dae5" (UID: "7af40de6-5f8c-4a76-b683-d8f412d5dae5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.720152 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" path="/var/lib/kubelet/pods/adfeb9cb-8e12-4ba9-aafe-8753a774d720/volumes" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.728593 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af40de6-5f8c-4a76-b683-d8f412d5dae5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:10 crc kubenswrapper[4832]: I1204 06:12:10.853265 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 06:12:11 crc kubenswrapper[4832]: I1204 06:12:11.235166 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7af40de6-5f8c-4a76-b683-d8f412d5dae5","Type":"ContainerDied","Data":"894f71de3d425410e12668f1c8ccd05fdf899fbababd2c47ed4e5b67ab219eef"} Dec 04 06:12:11 crc kubenswrapper[4832]: I1204 06:12:11.235207 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894f71de3d425410e12668f1c8ccd05fdf899fbababd2c47ed4e5b67ab219eef" Dec 04 06:12:11 crc kubenswrapper[4832]: I1204 06:12:11.235225 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.247240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerStarted","Data":"55543fc6387936b2f243eef1c3fdf4a0096ff2ae56cafadbeffb898f680dff8e"} Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796175 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 06:12:13 crc kubenswrapper[4832]: E1204 06:12:13.796374 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="extract-utilities" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796385 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="extract-utilities" Dec 04 06:12:13 crc kubenswrapper[4832]: E1204 06:12:13.796410 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="extract-content" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796416 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="extract-content" Dec 04 06:12:13 crc kubenswrapper[4832]: E1204 06:12:13.796425 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="registry-server" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796430 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="registry-server" Dec 04 06:12:13 crc kubenswrapper[4832]: E1204 06:12:13.796438 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af40de6-5f8c-4a76-b683-d8f412d5dae5" containerName="pruner" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796445 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af40de6-5f8c-4a76-b683-d8f412d5dae5" containerName="pruner" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796552 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfeb9cb-8e12-4ba9-aafe-8753a774d720" containerName="registry-server" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796564 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af40de6-5f8c-4a76-b683-d8f412d5dae5" containerName="pruner" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.796911 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.799496 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.800453 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.808614 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.866324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-var-lock\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.866380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.866638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kube-api-access\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.968418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kube-api-access\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.968497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-var-lock\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.968524 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.968621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.968803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-var-lock\") pod \"installer-9-crc\" (UID: 
\"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:13 crc kubenswrapper[4832]: I1204 06:12:13.987266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kube-api-access\") pod \"installer-9-crc\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:14 crc kubenswrapper[4832]: I1204 06:12:14.112446 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:14 crc kubenswrapper[4832]: I1204 06:12:14.260721 4832 generic.go:334] "Generic (PLEG): container finished" podID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerID="69692ccc57b698c9501b3a123ee19cc5ec6a17835714decbcf1560b5198cf074" exitCode=0 Dec 04 06:12:14 crc kubenswrapper[4832]: I1204 06:12:14.260826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbw42" event={"ID":"29b70a28-7b0c-486b-a0f9-76e2e877cf26","Type":"ContainerDied","Data":"69692ccc57b698c9501b3a123ee19cc5ec6a17835714decbcf1560b5198cf074"} Dec 04 06:12:14 crc kubenswrapper[4832]: I1204 06:12:14.264625 4832 generic.go:334] "Generic (PLEG): container finished" podID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerID="55543fc6387936b2f243eef1c3fdf4a0096ff2ae56cafadbeffb898f680dff8e" exitCode=0 Dec 04 06:12:14 crc kubenswrapper[4832]: I1204 06:12:14.264672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerDied","Data":"55543fc6387936b2f243eef1c3fdf4a0096ff2ae56cafadbeffb898f680dff8e"} Dec 04 06:12:14 crc kubenswrapper[4832]: I1204 06:12:14.530930 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 06:12:14 crc kubenswrapper[4832]: W1204 06:12:14.538034 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc4b62cbc_fdc4_4d4b_85eb_2d5a2289bb3d.slice/crio-e871b6822f703039d4f08a1bac965e824a6c7d62cd4c01da5d065502c390095c WatchSource:0}: Error finding container e871b6822f703039d4f08a1bac965e824a6c7d62cd4c01da5d065502c390095c: Status 404 returned error can't find the container with id e871b6822f703039d4f08a1bac965e824a6c7d62cd4c01da5d065502c390095c Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.271268 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d","Type":"ContainerStarted","Data":"e999f925c77dce9ae7a8c696033d22b18d6b153e2d18001086121a1046d189d0"} Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.271798 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d","Type":"ContainerStarted","Data":"e871b6822f703039d4f08a1bac965e824a6c7d62cd4c01da5d065502c390095c"} Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.273850 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerStarted","Data":"1a8d956ece5fe13a8c5071eaea35ae61461b1e8ec4a21957886e11b320cecab3"} Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.279588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nbw42" event={"ID":"29b70a28-7b0c-486b-a0f9-76e2e877cf26","Type":"ContainerStarted","Data":"91facbd5dadbceb19afc31f9d7ddacd94f8388a84c205d2d5308d0520677e616"} Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.294172 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.294150507 podStartE2EDuration="2.294150507s" podCreationTimestamp="2025-12-04 06:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:12:15.291623634 +0000 UTC m=+190.904441340" watchObservedRunningTime="2025-12-04 06:12:15.294150507 +0000 UTC m=+190.906968213" Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.325303 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbw42" podStartSLOduration=4.067837891 podStartE2EDuration="50.325285034s" podCreationTimestamp="2025-12-04 06:11:25 +0000 UTC" firstStartedPulling="2025-12-04 06:11:28.369750696 +0000 UTC m=+143.982568402" lastFinishedPulling="2025-12-04 06:12:14.627197839 +0000 UTC m=+190.240015545" observedRunningTime="2025-12-04 06:12:15.322566407 +0000 UTC m=+190.935384123" watchObservedRunningTime="2025-12-04 06:12:15.325285034 +0000 UTC m=+190.938102740" Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.954041 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nbw42" Dec 04 06:12:15 crc kubenswrapper[4832]: I1204 06:12:15.954329 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nbw42" Dec 04 06:12:16 crc kubenswrapper[4832]: I1204 06:12:16.285729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerStarted","Data":"ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be"} Dec 04 06:12:16 crc kubenswrapper[4832]: I1204 06:12:16.288153 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerStarted","Data":"b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4"} Dec 04 06:12:16 crc kubenswrapper[4832]: I1204 06:12:16.290049 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerID="999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6" exitCode=0 Dec 04 06:12:16 crc kubenswrapper[4832]: I1204 06:12:16.290087 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bs4tj" event={"ID":"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e","Type":"ContainerDied","Data":"999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6"} Dec 04 06:12:16 crc kubenswrapper[4832]: I1204 06:12:16.319109 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qss9l" podStartSLOduration=4.182667961 podStartE2EDuration="49.319088716s" podCreationTimestamp="2025-12-04 06:11:27 +0000 UTC" firstStartedPulling="2025-12-04 06:11:29.548591869 +0000 UTC m=+145.161409575" lastFinishedPulling="2025-12-04 06:12:14.685012624 +0000 UTC m=+190.297830330" observedRunningTime="2025-12-04 06:12:15.347639035 +0000 UTC m=+190.960456731" 
watchObservedRunningTime="2025-12-04 06:12:16.319088716 +0000 UTC m=+191.931906422" Dec 04 06:12:16 crc kubenswrapper[4832]: I1204 06:12:16.987691 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nbw42" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="registry-server" probeResult="failure" output=< Dec 04 06:12:16 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Dec 04 06:12:16 crc kubenswrapper[4832]: > Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.296214 4832 generic.go:334] "Generic (PLEG): container finished" podID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerID="ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be" exitCode=0 Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.296291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerDied","Data":"ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be"} Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.301000 4832 generic.go:334] "Generic (PLEG): container finished" podID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerID="b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4" exitCode=0 Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.301157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerDied","Data":"b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4"} Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.863962 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.864275 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:12:17 crc kubenswrapper[4832]: I1204 06:12:17.903075 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:12:19 crc kubenswrapper[4832]: I1204 06:12:19.312536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bs4tj" event={"ID":"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e","Type":"ContainerStarted","Data":"d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac"} Dec 04 06:12:19 crc kubenswrapper[4832]: I1204 06:12:19.332546 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bs4tj" podStartSLOduration=3.995469666 podStartE2EDuration="52.332530992s" podCreationTimestamp="2025-12-04 06:11:27 +0000 UTC" firstStartedPulling="2025-12-04 06:11:29.580818393 +0000 UTC m=+145.193636099" lastFinishedPulling="2025-12-04 06:12:17.917879719 +0000 UTC m=+193.530697425" observedRunningTime="2025-12-04 06:12:19.329970329 +0000 UTC m=+194.942788035" watchObservedRunningTime="2025-12-04 06:12:19.332530992 +0000 UTC m=+194.945348688" Dec 04 06:12:19 crc kubenswrapper[4832]: I1204 06:12:19.355761 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:12:21 crc kubenswrapper[4832]: I1204 06:12:21.323141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" 
event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerStarted","Data":"0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88"} Dec 04 06:12:21 crc kubenswrapper[4832]: I1204 06:12:21.325884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerStarted","Data":"a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f"} Dec 04 06:12:21 crc kubenswrapper[4832]: I1204 06:12:21.345773 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vlppg" podStartSLOduration=3.4002967809999998 podStartE2EDuration="53.345751266s" podCreationTimestamp="2025-12-04 06:11:28 +0000 UTC" firstStartedPulling="2025-12-04 06:11:30.663238039 +0000 UTC m=+146.276055745" lastFinishedPulling="2025-12-04 06:12:20.608692534 +0000 UTC m=+196.221510230" observedRunningTime="2025-12-04 06:12:21.342140878 +0000 UTC m=+196.954958584" watchObservedRunningTime="2025-12-04 06:12:21.345751266 +0000 UTC m=+196.958568972" Dec 04 06:12:21 crc kubenswrapper[4832]: I1204 06:12:21.360228 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j295k" podStartSLOduration=3.623593466 podStartE2EDuration="53.360208504s" podCreationTimestamp="2025-12-04 06:11:28 +0000 UTC" firstStartedPulling="2025-12-04 06:11:30.682212986 +0000 UTC m=+146.295030692" lastFinishedPulling="2025-12-04 06:12:20.418828024 +0000 UTC m=+196.031645730" observedRunningTime="2025-12-04 06:12:21.359479925 +0000 UTC m=+196.972297641" watchObservedRunningTime="2025-12-04 06:12:21.360208504 +0000 UTC m=+196.973026210" Dec 04 06:12:21 crc kubenswrapper[4832]: I1204 06:12:21.377983 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qss9l"] Dec 04 06:12:21 crc kubenswrapper[4832]: I1204 06:12:21.378233 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qss9l" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="registry-server" containerID="cri-o://1a8d956ece5fe13a8c5071eaea35ae61461b1e8ec4a21957886e11b320cecab3" gracePeriod=2 Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.338062 4832 generic.go:334] "Generic (PLEG): container finished" podID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerID="1a8d956ece5fe13a8c5071eaea35ae61461b1e8ec4a21957886e11b320cecab3" exitCode=0 Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.338146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerDied","Data":"1a8d956ece5fe13a8c5071eaea35ae61461b1e8ec4a21957886e11b320cecab3"} Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.592496 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.678104 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-utilities\") pod \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.678218 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-catalog-content\") pod \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.678291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwh8\" (UniqueName: \"kubernetes.io/projected/e3733728-9f9d-45f8-af66-d0427bb7cfe1-kube-api-access-4pwh8\") pod \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\" (UID: \"e3733728-9f9d-45f8-af66-d0427bb7cfe1\") " Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.679270 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-utilities" (OuterVolumeSpecName: "utilities") pod "e3733728-9f9d-45f8-af66-d0427bb7cfe1" (UID: "e3733728-9f9d-45f8-af66-d0427bb7cfe1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.685888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3733728-9f9d-45f8-af66-d0427bb7cfe1-kube-api-access-4pwh8" (OuterVolumeSpecName: "kube-api-access-4pwh8") pod "e3733728-9f9d-45f8-af66-d0427bb7cfe1" (UID: "e3733728-9f9d-45f8-af66-d0427bb7cfe1"). InnerVolumeSpecName "kube-api-access-4pwh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.702843 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3733728-9f9d-45f8-af66-d0427bb7cfe1" (UID: "e3733728-9f9d-45f8-af66-d0427bb7cfe1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.779344 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.779382 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3733728-9f9d-45f8-af66-d0427bb7cfe1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:22 crc kubenswrapper[4832]: I1204 06:12:22.779411 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwh8\" (UniqueName: \"kubernetes.io/projected/e3733728-9f9d-45f8-af66-d0427bb7cfe1-kube-api-access-4pwh8\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.345635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qss9l" event={"ID":"e3733728-9f9d-45f8-af66-d0427bb7cfe1","Type":"ContainerDied","Data":"13a03cab2480d599603fe429af2240e0c41f941f4534195b18ed146a0f233e3d"} Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.345687 4832 scope.go:117] "RemoveContainer" containerID="1a8d956ece5fe13a8c5071eaea35ae61461b1e8ec4a21957886e11b320cecab3" Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.345705 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qss9l" Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.363094 4832 scope.go:117] "RemoveContainer" containerID="55543fc6387936b2f243eef1c3fdf4a0096ff2ae56cafadbeffb898f680dff8e" Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.366100 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qss9l"] Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.368775 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qss9l"] Dec 04 06:12:23 crc kubenswrapper[4832]: I1204 06:12:23.379385 4832 scope.go:117] "RemoveContainer" containerID="ddf01246bd73bb3fe44eb075f21a082205589ce1d299d2851f8afe5e59fde528" Dec 04 06:12:24 crc kubenswrapper[4832]: I1204 06:12:24.717915 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" path="/var/lib/kubelet/pods/e3733728-9f9d-45f8-af66-d0427bb7cfe1/volumes" Dec 04 06:12:25 crc kubenswrapper[4832]: I1204 06:12:25.994627 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbw42" Dec 04 06:12:26 crc kubenswrapper[4832]: I1204 06:12:26.038418 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbw42" Dec 04 06:12:27 crc kubenswrapper[4832]: I1204 06:12:27.425801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:12:27 crc kubenswrapper[4832]: I1204 06:12:27.426123 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:12:27 crc kubenswrapper[4832]: I1204 06:12:27.461574 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:12:27 crc kubenswrapper[4832]: I1204 06:12:27.977244 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbw42"] Dec 04 06:12:27 crc kubenswrapper[4832]: I1204 06:12:27.977541 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbw42" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="registry-server" containerID="cri-o://91facbd5dadbceb19afc31f9d7ddacd94f8388a84c205d2d5308d0520677e616" gracePeriod=2 Dec 04 06:12:28 crc kubenswrapper[4832]: I1204 06:12:28.410373 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:12:28 crc kubenswrapper[4832]: I1204 06:12:28.514865 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:12:28 crc kubenswrapper[4832]: I1204 06:12:28.515517 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:12:28 crc kubenswrapper[4832]: I1204 06:12:28.550528 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:12:29 crc kubenswrapper[4832]: I1204 06:12:29.014875 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:12:29 crc kubenswrapper[4832]: I1204 06:12:29.014936 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:12:29 crc kubenswrapper[4832]: I1204 06:12:29.050737 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:12:29 crc kubenswrapper[4832]: I1204 06:12:29.419493 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:12:29 crc kubenswrapper[4832]: I1204 06:12:29.421508 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vlppg" Dec 04 06:12:30 crc kubenswrapper[4832]: I1204 06:12:30.388445 4832 generic.go:334] "Generic (PLEG): container finished" podID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerID="91facbd5dadbceb19afc31f9d7ddacd94f8388a84c205d2d5308d0520677e616" exitCode=0 Dec 04 06:12:30 crc kubenswrapper[4832]: I1204 06:12:30.389491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbw42" event={"ID":"29b70a28-7b0c-486b-a0f9-76e2e877cf26","Type":"ContainerDied","Data":"91facbd5dadbceb19afc31f9d7ddacd94f8388a84c205d2d5308d0520677e616"} Dec 04 06:12:30 crc kubenswrapper[4832]: I1204 06:12:30.997064 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbw42" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.084168 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrwv\" (UniqueName: \"kubernetes.io/projected/29b70a28-7b0c-486b-a0f9-76e2e877cf26-kube-api-access-hkrwv\") pod \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.084336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-catalog-content\") pod \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.084479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-utilities\") pod \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\" (UID: \"29b70a28-7b0c-486b-a0f9-76e2e877cf26\") " Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.085263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-utilities" (OuterVolumeSpecName: "utilities") pod "29b70a28-7b0c-486b-a0f9-76e2e877cf26" (UID: "29b70a28-7b0c-486b-a0f9-76e2e877cf26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.094915 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b70a28-7b0c-486b-a0f9-76e2e877cf26-kube-api-access-hkrwv" (OuterVolumeSpecName: "kube-api-access-hkrwv") pod "29b70a28-7b0c-486b-a0f9-76e2e877cf26" (UID: "29b70a28-7b0c-486b-a0f9-76e2e877cf26"). InnerVolumeSpecName "kube-api-access-hkrwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.141499 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29b70a28-7b0c-486b-a0f9-76e2e877cf26" (UID: "29b70a28-7b0c-486b-a0f9-76e2e877cf26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.185987 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.186027 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkrwv\" (UniqueName: \"kubernetes.io/projected/29b70a28-7b0c-486b-a0f9-76e2e877cf26-kube-api-access-hkrwv\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.186039 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b70a28-7b0c-486b-a0f9-76e2e877cf26-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.395593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbw42" event={"ID":"29b70a28-7b0c-486b-a0f9-76e2e877cf26","Type":"ContainerDied","Data":"976d89c39a16597bf7445af2877f3fa3a73994d4ee0f7831a5e7afc12e1fd340"} Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.395618 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbw42" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.395656 4832 scope.go:117] "RemoveContainer" containerID="91facbd5dadbceb19afc31f9d7ddacd94f8388a84c205d2d5308d0520677e616" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.415373 4832 scope.go:117] "RemoveContainer" containerID="69692ccc57b698c9501b3a123ee19cc5ec6a17835714decbcf1560b5198cf074" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.425987 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbw42"] Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.429797 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbw42"] Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.458235 4832 scope.go:117] "RemoveContainer" containerID="65e3a4c5c32690ccb40c5991ec7a78df39a53415da62e00c4b4cf870d767f66f" Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.778823 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vlppg"] Dec 04 06:12:31 crc kubenswrapper[4832]: I1204 06:12:31.779299 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vlppg" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="registry-server" containerID="cri-o://0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88" gracePeriod=2 Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.123426 4832 util.go:48] "No ready sandbox for pod can be found. 
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.196229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-utilities\") pod \"1abbca7a-e500-4733-9160-2a34dcfa0531\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") "
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.196280 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-catalog-content\") pod \"1abbca7a-e500-4733-9160-2a34dcfa0531\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") "
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.196424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwcq\" (UniqueName: \"kubernetes.io/projected/1abbca7a-e500-4733-9160-2a34dcfa0531-kube-api-access-7bwcq\") pod \"1abbca7a-e500-4733-9160-2a34dcfa0531\" (UID: \"1abbca7a-e500-4733-9160-2a34dcfa0531\") "
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.197187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-utilities" (OuterVolumeSpecName: "utilities") pod "1abbca7a-e500-4733-9160-2a34dcfa0531" (UID: "1abbca7a-e500-4733-9160-2a34dcfa0531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.203859 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abbca7a-e500-4733-9160-2a34dcfa0531-kube-api-access-7bwcq" (OuterVolumeSpecName: "kube-api-access-7bwcq") pod "1abbca7a-e500-4733-9160-2a34dcfa0531" (UID: "1abbca7a-e500-4733-9160-2a34dcfa0531"). InnerVolumeSpecName "kube-api-access-7bwcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.297627 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bwcq\" (UniqueName: \"kubernetes.io/projected/1abbca7a-e500-4733-9160-2a34dcfa0531-kube-api-access-7bwcq\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.297659 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.311358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1abbca7a-e500-4733-9160-2a34dcfa0531" (UID: "1abbca7a-e500-4733-9160-2a34dcfa0531"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.398342 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abbca7a-e500-4733-9160-2a34dcfa0531-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.403991 4832 generic.go:334] "Generic (PLEG): container finished" podID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerID="0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88" exitCode=0
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.404036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerDied","Data":"0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88"}
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.404052 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vlppg"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.404077 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vlppg" event={"ID":"1abbca7a-e500-4733-9160-2a34dcfa0531","Type":"ContainerDied","Data":"e18e69d262e4505a81ad9cf49cf85460a802aac7a9be32834405cff70c2f1f1d"}
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.404098 4832 scope.go:117] "RemoveContainer" containerID="0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.417140 4832 scope.go:117] "RemoveContainer" containerID="b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.436453 4832 scope.go:117] "RemoveContainer" containerID="7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.436553 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vlppg"]
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.436624 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vlppg"]
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.451294 4832 scope.go:117] "RemoveContainer" containerID="0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88"
Dec 04 06:12:32 crc kubenswrapper[4832]: E1204 06:12:32.451746 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88\": container with ID starting with 0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88 not found: ID does not exist" containerID="0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.451779 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88"} err="failed to get container status \"0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88\": rpc error: code = NotFound desc = could not find container \"0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88\": container with ID starting with 0155d69aa6f64ff0cfb8743546b642a3e109346cd8ec0b531d21fa5191714d88 not found: ID does not exist"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.451801 4832 scope.go:117] "RemoveContainer" containerID="b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4"
Dec 04 06:12:32 crc kubenswrapper[4832]: E1204 06:12:32.452077 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4\": container with ID starting with b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4 not found: ID does not exist" containerID="b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.452118 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4"} err="failed to get container status \"b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4\": rpc error: code = NotFound desc = could not find container \"b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4\": container with ID starting with b03a193287298bee495cf84081ce2ace754ebc39c5bbc968f3785ac4e138cef4 not found: ID does not exist"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.452145 4832 scope.go:117] "RemoveContainer" containerID="7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322"
Dec 04 06:12:32 crc kubenswrapper[4832]: E1204 06:12:32.452548 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322\": container with ID starting with 7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322 not found: ID does not exist" containerID="7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.452571 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322"} err="failed to get container status \"7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322\": rpc error: code = NotFound desc = could not find container \"7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322\": container with ID starting with 7d3a862769076e22a2c54a1abdcde4a1a0263260ac14f8facd2df7ff1e451322 not found: ID does not exist"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.717131 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" path="/var/lib/kubelet/pods/1abbca7a-e500-4733-9160-2a34dcfa0531/volumes"
Dec 04 06:12:32 crc kubenswrapper[4832]: I1204 06:12:32.717841 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" path="/var/lib/kubelet/pods/29b70a28-7b0c-486b-a0f9-76e2e877cf26/volumes"
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.150371 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" podUID="d1bc185a-fac5-4103-947a-d3d660802249" containerName="oauth-openshift" containerID="cri-o://81abaebb50da5e7347514ea5a4711dc7c0f0157872f50947a4a24cd3b202adde" gracePeriod=15
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.362423 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.362493 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.362537 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4"
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.363082 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.363160 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2" gracePeriod=600
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.424122 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1bc185a-fac5-4103-947a-d3d660802249" containerID="81abaebb50da5e7347514ea5a4711dc7c0f0157872f50947a4a24cd3b202adde" exitCode=0
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.424176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" event={"ID":"d1bc185a-fac5-4103-947a-d3d660802249","Type":"ContainerDied","Data":"81abaebb50da5e7347514ea5a4711dc7c0f0157872f50947a4a24cd3b202adde"}
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.583921 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x"
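The machine-config-daemon restart above is driven by a plain HTTP liveness probe: GET http://127.0.0.1:8798/health failed with "connection refused", so kubelet marked the container unhealthy and killed it with the pod's 600-second grace period. A standalone approximation of that check; the URL comes from the log, while the 1-second timeout is an assumption:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func livenessProbe(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return fmt.Errorf("probe failed: %w", err) // e.g. connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("probe failed: unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := livenessProbe("http://127.0.0.1:8798/health"); err != nil {
    		// kubelet's equivalent step: mark unhealthy and restart the
    		// container with the pod's termination grace period (600 s here).
    		fmt.Println("unhealthy, container will be restarted:", err)
    	}
    }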
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640431 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640500 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1bc185a-fac5-4103-947a-d3d660802249-audit-dir\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640539 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640563 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640585 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.640763 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1bc185a-fac5-4103-947a-d3d660802249-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641685 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641728 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641753 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641792 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641844 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.641879 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gmcc\" (UniqueName: \"kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc\") pod \"d1bc185a-fac5-4103-947a-d3d660802249\" (UID: \"d1bc185a-fac5-4103-947a-d3d660802249\") "
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.642186 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.642202 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d1bc185a-fac5-4103-947a-d3d660802249-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.642213 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.642225 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.643721 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.648917 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.649282 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.649880 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.651624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.657639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.657951 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.658162 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.660875 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.661061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc" (OuterVolumeSpecName: "kube-api-access-5gmcc") pod "d1bc185a-fac5-4103-947a-d3d660802249" (UID: "d1bc185a-fac5-4103-947a-d3d660802249"). InnerVolumeSpecName "kube-api-access-5gmcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744043 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744087 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744102 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744115 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744128 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744161 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744175 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744203 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gmcc\" (UniqueName: \"kubernetes.io/projected/d1bc185a-fac5-4103-947a-d3d660802249-kube-api-access-5gmcc\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744216 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:35 crc kubenswrapper[4832]: I1204 06:12:35.744229 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d1bc185a-fac5-4103-947a-d3d660802249-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.429810 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x"
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.429798 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vcj7x" event={"ID":"d1bc185a-fac5-4103-947a-d3d660802249","Type":"ContainerDied","Data":"d21da918ba497fef77640b463931850815dd050b28b74dec54104d698d3cbfaa"}
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.430227 4832 scope.go:117] "RemoveContainer" containerID="81abaebb50da5e7347514ea5a4711dc7c0f0157872f50947a4a24cd3b202adde"
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.433465 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2" exitCode=0
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.433515 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2"}
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.433544 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"e74b17e6dc40ea52d596a660c11c2fce066900038d8d935b5047def34efc0e45"}
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.468462 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vcj7x"]
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.473611 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vcj7x"]
Dec 04 06:12:36 crc kubenswrapper[4832]: I1204 06:12:36.722144 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bc185a-fac5-4103-947a-d3d660802249" path="/var/lib/kubelet/pods/d1bc185a-fac5-4103-947a-d3d660802249/volumes"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.643788 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-857d94f549-wssxv"]
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.644696 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.644717 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.644911 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="extract-content"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.644924 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="extract-content"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.644993 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="extract-utilities"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645008 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="extract-utilities"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645027 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="extract-content"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645039 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="extract-content"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645059 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645070 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645089 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="extract-content"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645101 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="extract-content"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645118 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bc185a-fac5-4103-947a-d3d660802249" containerName="oauth-openshift"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645129 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bc185a-fac5-4103-947a-d3d660802249" containerName="oauth-openshift"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645145 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="extract-utilities"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645157 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="extract-utilities"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645170 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645182 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: E1204 06:12:39.645197 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="extract-utilities"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645208 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="extract-utilities"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645374 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bc185a-fac5-4103-947a-d3d660802249" containerName="oauth-openshift"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645429 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abbca7a-e500-4733-9160-2a34dcfa0531" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645445 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b70a28-7b0c-486b-a0f9-76e2e877cf26" containerName="registry-server"
Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.645466 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3733728-9f9d-45f8-af66-d0427bb7cfe1" containerName="registry-server"
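The cpu_manager/memory_manager burst above is start-of-pod housekeeping: before admitting the new oauth-openshift pod, the resource managers drop CPU-set and memory assignments still recorded for containers of pods that were just deleted. A sketch of that sweep over a hypothetical assignment map (the real managers persist this state; this is only the shape of the logic):

    package main

    import "fmt"

    type containerKey struct{ podUID, containerName string }

    // removeStaleState deletes assignments whose pod is no longer active,
    // mirroring the "RemoveStaleState" / "Deleted CPUSet assignment" pairs.
    func removeStaleState(assignments map[containerKey]string, activePods map[string]bool) {
    	for key := range assignments {
    		if activePods[key.podUID] {
    			continue
    		}
    		fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
    			key.podUID, key.containerName)
    		delete(assignments, key) // "Deleted CPUSet assignment"
    	}
    }

    func main() {
    	assignments := map[containerKey]string{
    		{"1abbca7a", "registry-server"}: "cpus 2-3",
    		{"29b70a28", "extract-content"}: "cpus 0-1",
    	}
    	// None of the old marketplace pods are active any more.
    	removeStaleState(assignments, map[string]bool{})
    	fmt.Println("remaining assignments:", len(assignments))
    }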
containerName="registry-server" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.646031 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.653153 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.653179 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.653227 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.653650 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.653679 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.653817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.654199 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.660413 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.660596 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.660666 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.661068 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.661254 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.664996 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-857d94f549-wssxv"] Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.669818 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.683432 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.685743 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxqn\" (UniqueName: 
\"kubernetes.io/projected/51b49f36-5d58-475c-a3c3-36f6d9a29649-kube-api-access-wkxqn\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693340 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-login\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51b49f36-5d58-475c-a3c3-36f6d9a29649-audit-dir\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-serving-cert\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693465 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-service-ca\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693512 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-session\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693565 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-audit-policies\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693670 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-cliconfig\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693703 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-router-certs\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693756 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-error\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.693803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795054 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: 
\"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795219 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxqn\" (UniqueName: \"kubernetes.io/projected/51b49f36-5d58-475c-a3c3-36f6d9a29649-kube-api-access-wkxqn\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795256 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-login\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51b49f36-5d58-475c-a3c3-36f6d9a29649-audit-dir\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795358 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-serving-cert\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795412 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-service-ca\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795466 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-session\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-audit-policies\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " 
pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-cliconfig\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795589 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-router-certs\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.795642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-error\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.797267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-cliconfig\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.797384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-audit-policies\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.797377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51b49f36-5d58-475c-a3c3-36f6d9a29649-audit-dir\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.798087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-service-ca\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 
06:12:39.798177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.802199 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-router-certs\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.802543 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-error\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.802626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.803820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-session\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.803850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-system-serving-cert\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.804263 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.804409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.805536 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/51b49f36-5d58-475c-a3c3-36f6d9a29649-v4-0-config-user-template-login\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.816255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxqn\" (UniqueName: \"kubernetes.io/projected/51b49f36-5d58-475c-a3c3-36f6d9a29649-kube-api-access-wkxqn\") pod \"oauth-openshift-857d94f549-wssxv\" (UID: \"51b49f36-5d58-475c-a3c3-36f6d9a29649\") " pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:39 crc kubenswrapper[4832]: I1204 06:12:39.964535 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:40 crc kubenswrapper[4832]: I1204 06:12:40.336296 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-857d94f549-wssxv"] Dec 04 06:12:41 crc kubenswrapper[4832]: I1204 06:12:41.271117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" event={"ID":"51b49f36-5d58-475c-a3c3-36f6d9a29649","Type":"ContainerStarted","Data":"d68cb0872fa08f8d11cc181fb46f5c01b43f052cd91025792b647e276f85a066"} Dec 04 06:12:41 crc kubenswrapper[4832]: I1204 06:12:41.271478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" event={"ID":"51b49f36-5d58-475c-a3c3-36f6d9a29649","Type":"ContainerStarted","Data":"b410d4427fd880553a96b13f827e62e53fb868f6e5abcf8709c84158b4e1a81e"} Dec 04 06:12:41 crc kubenswrapper[4832]: I1204 06:12:41.294618 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" podStartSLOduration=31.294598459 podStartE2EDuration="31.294598459s" podCreationTimestamp="2025-12-04 06:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:12:41.292692827 +0000 UTC m=+216.905510553" watchObservedRunningTime="2025-12-04 06:12:41.294598459 +0000 UTC m=+216.907416175" Dec 04 06:12:42 crc kubenswrapper[4832]: I1204 06:12:42.275721 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:42 crc kubenswrapper[4832]: I1204 06:12:42.279869 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-857d94f549-wssxv" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.392297 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.392845 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef" gracePeriod=15 Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.392892 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2" gracePeriod=15 Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.392953 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241" gracePeriod=15 Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.392937 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862" gracePeriod=15 Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.393022 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37" gracePeriod=15 Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.395664 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.395930 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.395997 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.396077 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396142 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.396202 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396258 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.396310 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396365 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.396450 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396520 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.396581 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396631 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396787 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396868 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.396951 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.397012 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.397067 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 06:12:52 crc kubenswrapper[4832]: E1204 06:12:52.397211 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.397275 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.397474 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.398844 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.400334 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.403769 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.460936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.460995 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.461029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.461046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.461074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.461103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.461123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.461145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562745 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562821 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562887 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.562978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:52 crc kubenswrapper[4832]: I1204 06:12:52.563000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.336996 4832 generic.go:334] "Generic (PLEG): container finished" podID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" containerID="e999f925c77dce9ae7a8c696033d22b18d6b153e2d18001086121a1046d189d0" exitCode=0 Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.337124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d","Type":"ContainerDied","Data":"e999f925c77dce9ae7a8c696033d22b18d6b153e2d18001086121a1046d189d0"} Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.337937 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.339518 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.340766 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.341629 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2" exitCode=0 Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.341649 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37" exitCode=0 Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.341659 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862" exitCode=0 Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.341665 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241" exitCode=2 Dec 04 06:12:53 crc kubenswrapper[4832]: I1204 06:12:53.341699 4832 scope.go:117] "RemoveContainer" containerID="64d1933a70753e598cf480c15d0a06614c2e04e2ac976e62b8ad4065a3b0c97e" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.350020 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.643301 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.644057 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.712880 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.763530 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.764385 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.765003 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.765271 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.790612 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-var-lock\") pod \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.790783 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-var-lock" (OuterVolumeSpecName: "var-lock") pod "c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" (UID: "c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.790841 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kube-api-access\") pod \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.790878 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kubelet-dir\") pod \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\" (UID: \"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d\") " Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.791076 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" (UID: "c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.791526 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.791575 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.796878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" (UID: "c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892088 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892116 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892442 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892500 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892602 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.892633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.994079 4832 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.994108 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:54 crc kubenswrapper[4832]: I1204 06:12:54.994116 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.357829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d","Type":"ContainerDied","Data":"e871b6822f703039d4f08a1bac965e824a6c7d62cd4c01da5d065502c390095c"} Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.357887 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e871b6822f703039d4f08a1bac965e824a6c7d62cd4c01da5d065502c390095c" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.357848 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.360807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.361642 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef" exitCode=0 Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.361701 4832 scope.go:117] "RemoveContainer" containerID="edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.361847 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.372510 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.373343 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.379219 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.379605 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.386036 4832 scope.go:117] "RemoveContainer" containerID="fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.398238 4832 scope.go:117] "RemoveContainer" containerID="1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.412068 4832 scope.go:117] "RemoveContainer" containerID="4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.424616 4832 scope.go:117] "RemoveContainer" containerID="dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.437769 4832 scope.go:117] "RemoveContainer" containerID="aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.465862 4832 scope.go:117] "RemoveContainer" containerID="edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2" Dec 04 06:12:55 crc kubenswrapper[4832]: E1204 06:12:55.466870 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\": container with ID starting with edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2 not found: ID does not exist" containerID="edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.466916 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2"} err="failed to get container status \"edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\": rpc error: code = NotFound desc = could not find container 
\"edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2\": container with ID starting with edbc67663371cadb3abd81ce69e8857ad4de13c766af66b7987cfc04650e3eb2 not found: ID does not exist" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.466941 4832 scope.go:117] "RemoveContainer" containerID="fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37" Dec 04 06:12:55 crc kubenswrapper[4832]: E1204 06:12:55.467172 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\": container with ID starting with fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37 not found: ID does not exist" containerID="fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467200 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37"} err="failed to get container status \"fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\": rpc error: code = NotFound desc = could not find container \"fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37\": container with ID starting with fd89ae3b3f1a8c10e5a640ef7aa1c5ed769ecef0e692b11996e26224cd572b37 not found: ID does not exist" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467213 4832 scope.go:117] "RemoveContainer" containerID="1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862" Dec 04 06:12:55 crc kubenswrapper[4832]: E1204 06:12:55.467444 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\": container with ID starting with 1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862 not found: ID does not exist" containerID="1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467472 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862"} err="failed to get container status \"1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\": rpc error: code = NotFound desc = could not find container \"1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862\": container with ID starting with 1e3c06513ba7aac939cf6b4f4a8f04a37a1086acb7017803dcaf9a675d0bf862 not found: ID does not exist" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467484 4832 scope.go:117] "RemoveContainer" containerID="4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241" Dec 04 06:12:55 crc kubenswrapper[4832]: E1204 06:12:55.467708 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\": container with ID starting with 4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241 not found: ID does not exist" containerID="4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467739 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241"} 
err="failed to get container status \"4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\": rpc error: code = NotFound desc = could not find container \"4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241\": container with ID starting with 4e89e17350d5854d8d949b3874d2a83eee6dea3097d460bc9f3d07748f40f241 not found: ID does not exist" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467756 4832 scope.go:117] "RemoveContainer" containerID="dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef" Dec 04 06:12:55 crc kubenswrapper[4832]: E1204 06:12:55.467963 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\": container with ID starting with dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef not found: ID does not exist" containerID="dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.467992 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef"} err="failed to get container status \"dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\": rpc error: code = NotFound desc = could not find container \"dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef\": container with ID starting with dcaa58e2d36a40fce191d075a24b48c6719da3213ec841e4d708d24142e48eef not found: ID does not exist" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.468009 4832 scope.go:117] "RemoveContainer" containerID="aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3" Dec 04 06:12:55 crc kubenswrapper[4832]: E1204 06:12:55.468207 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\": container with ID starting with aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3 not found: ID does not exist" containerID="aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3" Dec 04 06:12:55 crc kubenswrapper[4832]: I1204 06:12:55.468233 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3"} err="failed to get container status \"aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\": rpc error: code = NotFound desc = could not find container \"aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3\": container with ID starting with aab60b7704259d425098aa70618e032510dda0ad842b226058915e85d48a27b3 not found: ID does not exist" Dec 04 06:12:56 crc kubenswrapper[4832]: I1204 06:12:56.717087 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 04 06:12:57 crc kubenswrapper[4832]: E1204 06:12:57.437875 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:57 crc kubenswrapper[4832]: I1204 06:12:57.438235 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:57 crc kubenswrapper[4832]: W1204 06:12:57.481267 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2d1a3895c270166b51c474a16e2fad3bb8b6359e90ad7f236bf23decac45f190 WatchSource:0}: Error finding container 2d1a3895c270166b51c474a16e2fad3bb8b6359e90ad7f236bf23decac45f190: Status 404 returned error can't find the container with id 2d1a3895c270166b51c474a16e2fad3bb8b6359e90ad7f236bf23decac45f190 Dec 04 06:12:57 crc kubenswrapper[4832]: E1204 06:12:57.484907 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187dee6c3353f589 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 06:12:57.484121481 +0000 UTC m=+233.096939187,LastTimestamp:2025-12-04 06:12:57.484121481 +0000 UTC m=+233.096939187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 06:12:58 crc kubenswrapper[4832]: I1204 06:12:58.380412 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3"} Dec 04 06:12:58 crc kubenswrapper[4832]: I1204 06:12:58.380465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2d1a3895c270166b51c474a16e2fad3bb8b6359e90ad7f236bf23decac45f190"} Dec 04 06:12:58 crc kubenswrapper[4832]: E1204 06:12:58.381196 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:12:58 crc kubenswrapper[4832]: I1204 06:12:58.381821 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.229892 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187dee6c3353f589 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 06:12:57.484121481 +0000 UTC m=+233.096939187,LastTimestamp:2025-12-04 06:12:57.484121481 +0000 UTC m=+233.096939187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.804533 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.804792 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.805034 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.805232 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.805533 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:12:59 crc kubenswrapper[4832]: I1204 06:12:59.805559 4832 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 04 06:12:59 crc kubenswrapper[4832]: E1204 06:12:59.805873 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Dec 04 06:13:00 crc kubenswrapper[4832]: E1204 06:13:00.007548 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Dec 04 06:13:00 crc kubenswrapper[4832]: E1204 06:13:00.408441 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: 
connection refused" interval="800ms" Dec 04 06:13:00 crc kubenswrapper[4832]: E1204 06:13:00.999549 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:13:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:13:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:13:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T06:13:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:01 crc kubenswrapper[4832]: E1204 06:13:01.000229 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:01 crc kubenswrapper[4832]: E1204 06:13:01.000494 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:01 crc kubenswrapper[4832]: E1204 06:13:01.000691 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:01 crc kubenswrapper[4832]: E1204 06:13:01.000966 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:01 crc kubenswrapper[4832]: E1204 06:13:01.000987 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 06:13:01 crc kubenswrapper[4832]: E1204 06:13:01.210175 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Dec 04 06:13:02 crc kubenswrapper[4832]: E1204 06:13:02.810853 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Dec 04 06:13:04 crc kubenswrapper[4832]: I1204 06:13:04.714159 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:06 crc kubenswrapper[4832]: E1204 06:13:06.012646 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s" Dec 04 06:13:06 crc kubenswrapper[4832]: I1204 06:13:06.710831 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:06 crc kubenswrapper[4832]: I1204 06:13:06.711809 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:06 crc kubenswrapper[4832]: I1204 06:13:06.732114 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:06 crc kubenswrapper[4832]: I1204 06:13:06.732186 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:06 crc kubenswrapper[4832]: E1204 06:13:06.732822 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:06 crc kubenswrapper[4832]: I1204 06:13:06.733646 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.427213 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.427851 4832 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce" exitCode=1 Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.428009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce"} Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.428738 4832 scope.go:117] "RemoveContainer" containerID="2900ada324750d9beccd09ab83ddbbd05099c445374a07207f9433abe459bbce" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.429862 4832 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.430226 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.444840 4832 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f735795212802a8035509da43ad7b33ed516e2eba049c1496b760e1cb75e384a" exitCode=0 Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.444937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f735795212802a8035509da43ad7b33ed516e2eba049c1496b760e1cb75e384a"} Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.445008 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aa28916b0005583b1283aff45e17231f21329724c07b55f07b1cc350c087fe07"} Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.445351 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.445367 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:07 crc kubenswrapper[4832]: E1204 06:13:07.445998 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:07 
crc kubenswrapper[4832]: I1204 06:13:07.446102 4832 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:07 crc kubenswrapper[4832]: I1204 06:13:07.446271 4832 status_manager.go:851] "Failed to get status for pod" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.456667 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9943f7a1a8d40ee44a55b4bc6ad8f5d42a4f607e977873c2cf4538d191dadc80"} Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.457369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9962063456c420020d1de69983c45df854cc33891384b17824c8c5f12f4e6082"} Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.457478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bc0b5d126e71a739bd6588bff63213213d1faeae53f7736df30addedf112ee08"} Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.457493 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ac1d7327f2e78e0f66f4456ce86bb2d97eba8de7f290822abdcf9ca4cfd60ac1"} Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.460087 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.460152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8464a0c68f01a0d14d6eec377bb332fc7750e47cff2d7809b7d2d7a36ad0268d"} Dec 04 06:13:08 crc kubenswrapper[4832]: I1204 06:13:08.840215 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:13:09 crc kubenswrapper[4832]: I1204 06:13:09.467626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"390f9cf88f45111baa9314070227828294cc605002b2fe0319c069644d3eb728"} Dec 04 06:13:09 crc kubenswrapper[4832]: I1204 06:13:09.467890 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:09 crc kubenswrapper[4832]: I1204 06:13:09.467905 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:11 crc kubenswrapper[4832]: I1204 06:13:11.734283 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:11 crc kubenswrapper[4832]: I1204 06:13:11.734654 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:11 crc kubenswrapper[4832]: I1204 06:13:11.740047 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:14 crc kubenswrapper[4832]: I1204 06:13:14.478353 4832 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:14 crc kubenswrapper[4832]: I1204 06:13:14.722633 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37569e09-9e77-4e53-83af-8fe6d44a74ff" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.047486 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.050709 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.495547 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.495910 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.498835 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37569e09-9e77-4e53-83af-8fe6d44a74ff" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.499524 4832 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://ac1d7327f2e78e0f66f4456ce86bb2d97eba8de7f290822abdcf9ca4cfd60ac1" Dec 04 06:13:15 crc kubenswrapper[4832]: I1204 06:13:15.499560 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:16 crc kubenswrapper[4832]: I1204 06:13:16.499717 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 06:13:16 crc kubenswrapper[4832]: I1204 06:13:16.499818 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:16 crc kubenswrapper[4832]: I1204 06:13:16.499836 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d" Dec 04 06:13:16 crc kubenswrapper[4832]: I1204 06:13:16.502557 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37569e09-9e77-4e53-83af-8fe6d44a74ff" Dec 
Dec 04 06:13:17 crc kubenswrapper[4832]: I1204 06:13:17.504060 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d"
Dec 04 06:13:17 crc kubenswrapper[4832]: I1204 06:13:17.504091 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b9f33b2b-3ebe-4107-96a0-40d7892a597d"
Dec 04 06:13:17 crc kubenswrapper[4832]: I1204 06:13:17.507502 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37569e09-9e77-4e53-83af-8fe6d44a74ff"
Dec 04 06:13:18 crc kubenswrapper[4832]: I1204 06:13:18.845826 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 06:13:23 crc kubenswrapper[4832]: I1204 06:13:23.355311 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 04 06:13:23 crc kubenswrapper[4832]: I1204 06:13:23.437445 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 04 06:13:23 crc kubenswrapper[4832]: I1204 06:13:23.950179 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 04 06:13:24 crc kubenswrapper[4832]: I1204 06:13:24.736542 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 04 06:13:25 crc kubenswrapper[4832]: I1204 06:13:25.463432 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 04 06:13:25 crc kubenswrapper[4832]: I1204 06:13:25.954259 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 04 06:13:25 crc kubenswrapper[4832]: I1204 06:13:25.959853 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.204705 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.277639 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.290562 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.331612 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.487969 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.494040 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.798542 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.812490 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.868224 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 04 06:13:26 crc kubenswrapper[4832]: I1204 06:13:26.966549 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.024150 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.031163 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.108664 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.160128 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.301717 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.308977 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.412707 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.413737 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.424916 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.525169 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.575812 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.683450 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.697017 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.723757 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.738659 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.871585 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.872239 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.924497 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 04 06:13:27 crc kubenswrapper[4832]: I1204 06:13:27.949326 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.026313 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.028259 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.073516 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.081869 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.093127 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.241539 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.265048 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.352964 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.365857 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.400627 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.439509 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.569993 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.615139 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.666972 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.726588 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.751563 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.792099 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.835121 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 04 06:13:28 crc kubenswrapper[4832]: I1204 06:13:28.928385 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.009437 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.079854 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.084422 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.091095 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.115656 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.158196 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.186336 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.225021 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.273230 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.429033 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.530848 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.719563 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.737159 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.745988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.773961 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.850243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.868335 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 04 06:13:29 crc kubenswrapper[4832]: I1204 06:13:29.897227 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.009710 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.042628 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.142224 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.173505 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.181322 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.303718 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.343364 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.345545 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.356050 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.407962 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.447815 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.453920 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.496777 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.524762 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.558200 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.619070 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.624939 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.627683 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.677340 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.678747 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.789286 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.796479 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.824856 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.938416 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.940068 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 04 06:13:30 crc kubenswrapper[4832]: I1204 06:13:30.957094 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.009362 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.117434 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.127607 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.142731 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.206534 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.216993 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.231577 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.240835 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.283117 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.314265 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.356927 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.419320 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.475341 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.509167 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.585432 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.589573 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.589619 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.589640 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.593904 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.626355 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.632452 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.632429567 podStartE2EDuration="17.632429567s" podCreationTimestamp="2025-12-04 06:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:13:31.610428777 +0000 UTC m=+267.223246503" watchObservedRunningTime="2025-12-04 06:13:31.632429567 +0000 UTC m=+267.245247293"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.774301 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.795274 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.823445 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.854770 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.903624 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.944171 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.975130 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 04 06:13:31 crc kubenswrapper[4832]: I1204 06:13:31.976523 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.018217 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.050138 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.086504 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.219041 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.242670 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.284243 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.310608 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.462086 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.534765 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.558445 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.573849 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.605900 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.643546 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.655164 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.661428 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.685607 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.744081 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.877730 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.920044 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.945035 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.951418 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 04 06:13:32 crc kubenswrapper[4832]: I1204 06:13:32.979540 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.021110 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.069961 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.123315 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.141770 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.157572 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.198264 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.253676 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.302319 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.307942 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.333183 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.344156 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.362336 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.382503 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.458347 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.617161 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.617630 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.627849 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.726541 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.929013 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 04 06:13:33 crc kubenswrapper[4832]: I1204 06:13:33.966998 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.081774 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.173038 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.228789 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.347721 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.351462 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.470825 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.509285 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.518910 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.524230 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.654529 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.749697 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.783468 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.898697 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 04 06:13:34 crc kubenswrapper[4832]: I1204 06:13:34.936899 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.031802 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.077687 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.214192 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.215716 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.272784 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.354077 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.373810 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.386409 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.453324 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.538688 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.573256 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.583709 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.680921 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.753987 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.860313 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.896374 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 04 06:13:35 crc kubenswrapper[4832]: I1204 06:13:35.982835 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.041664 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.087225 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.089380 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.161594 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.176688 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.305128 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.380211 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.410942 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.544903 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.593205 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.623675 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.753007 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.759334 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.812183 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.824700 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.824982 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3" gracePeriod=5
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.873813 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 04 06:13:36 crc kubenswrapper[4832]: I1204 06:13:36.877052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.000886 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
kubenswrapper[4832]: I1204 06:13:37.121615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.242864 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.263063 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.557749 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.585560 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.605557 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.618667 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.878729 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 04 06:13:37 crc kubenswrapper[4832]: I1204 06:13:37.927038 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.077078 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.286557 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.297182 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.306222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.381633 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.382864 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.500086 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.507666 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.620025 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.656178 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.737709 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.821520 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.929530 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.955939 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 04 06:13:38 crc kubenswrapper[4832]: I1204 06:13:38.991974 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.030156 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.037226 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.101177 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.107014 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.253263 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.300063 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.306801 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.337720 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.505775 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 04 06:13:39 crc kubenswrapper[4832]: I1204 06:13:39.526607 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 04 06:13:40 crc kubenswrapper[4832]: I1204 06:13:40.197068 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 04 06:13:40 crc kubenswrapper[4832]: I1204 06:13:40.465325 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 04 06:13:41 crc kubenswrapper[4832]: I1204 06:13:41.565379 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 06:13:41 crc kubenswrapper[4832]: I1204 06:13:41.956277 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mt7dw"] Dec 04 06:13:41 crc kubenswrapper[4832]: I1204 06:13:41.956523 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mt7dw" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="registry-server" containerID="cri-o://662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca" gracePeriod=30 Dec 04 06:13:41 crc kubenswrapper[4832]: I1204 06:13:41.974550 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qzdgh"] Dec 04 06:13:41 crc kubenswrapper[4832]: I1204 06:13:41.974890 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qzdgh" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="registry-server" containerID="cri-o://99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea" gracePeriod=30 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.000477 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pqqsl"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.000898 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerName="marketplace-operator" containerID="cri-o://7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458" gracePeriod=30 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.012831 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bs4tj"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.013157 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bs4tj" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="registry-server" containerID="cri-o://d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac" gracePeriod=30 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.029806 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j295k"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.030083 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j295k" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="registry-server" containerID="cri-o://a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f" gracePeriod=30 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.034221 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q8xv8"] Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.034415 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.034431 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.034454 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" 
containerName="installer" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.034461 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" containerName="installer" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.034654 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.034675 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b62cbc-fdc4-4d4b-85eb-2d5a2289bb3d" containerName="installer" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.035086 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.041159 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q8xv8"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.184996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5e811d7-d4fd-4504-b6d0-8d653628465d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.185050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5e811d7-d4fd-4504-b6d0-8d653628465d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.185083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp654\" (UniqueName: \"kubernetes.io/projected/d5e811d7-d4fd-4504-b6d0-8d653628465d-kube-api-access-qp654\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.285965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp654\" (UniqueName: \"kubernetes.io/projected/d5e811d7-d4fd-4504-b6d0-8d653628465d-kube-api-access-qp654\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.286355 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5e811d7-d4fd-4504-b6d0-8d653628465d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.286385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5e811d7-d4fd-4504-b6d0-8d653628465d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q8xv8\" 
(UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.288066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5e811d7-d4fd-4504-b6d0-8d653628465d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.292532 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5e811d7-d4fd-4504-b6d0-8d653628465d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.303487 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mt7dw" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.303949 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp654\" (UniqueName: \"kubernetes.io/projected/d5e811d7-d4fd-4504-b6d0-8d653628465d-kube-api-access-qp654\") pod \"marketplace-operator-79b997595-q8xv8\" (UID: \"d5e811d7-d4fd-4504-b6d0-8d653628465d\") " pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.370062 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.370168 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.386792 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-utilities\") pod \"3eb4072b-8c81-4808-b3a6-9be9fc814060\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.386860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjm7q\" (UniqueName: \"kubernetes.io/projected/3eb4072b-8c81-4808-b3a6-9be9fc814060-kube-api-access-gjm7q\") pod \"3eb4072b-8c81-4808-b3a6-9be9fc814060\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.386941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-catalog-content\") pod \"3eb4072b-8c81-4808-b3a6-9be9fc814060\" (UID: \"3eb4072b-8c81-4808-b3a6-9be9fc814060\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.387800 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-utilities" (OuterVolumeSpecName: "utilities") pod "3eb4072b-8c81-4808-b3a6-9be9fc814060" (UID: "3eb4072b-8c81-4808-b3a6-9be9fc814060"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.394780 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb4072b-8c81-4808-b3a6-9be9fc814060-kube-api-access-gjm7q" (OuterVolumeSpecName: "kube-api-access-gjm7q") pod "3eb4072b-8c81-4808-b3a6-9be9fc814060" (UID: "3eb4072b-8c81-4808-b3a6-9be9fc814060"). InnerVolumeSpecName "kube-api-access-gjm7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.441891 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.453289 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.459552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eb4072b-8c81-4808-b3a6-9be9fc814060" (UID: "3eb4072b-8c81-4808-b3a6-9be9fc814060"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.476587 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.481713 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.487846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.487885 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.487910 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.487928 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.487987 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488028 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488045 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488072 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488339 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488358 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488370 4832 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488381 4832 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488413 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb4072b-8c81-4808-b3a6-9be9fc814060-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488426 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjm7q\" (UniqueName: \"kubernetes.io/projected/3eb4072b-8c81-4808-b3a6-9be9fc814060-kube-api-access-gjm7q\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.488481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.498275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589065 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-utilities\") pod \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589113 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-catalog-content\") pod \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589150 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnw28\" (UniqueName: \"kubernetes.io/projected/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-kube-api-access-vnw28\") pod \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589179 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5m7\" (UniqueName: \"kubernetes.io/projected/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-kube-api-access-pn5m7\") pod \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589233 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-catalog-content\") pod \"f0f3ccce-259a-43f4-883e-a8f278c34053\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589251 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-utilities\") pod \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\" (UID: \"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-trusted-ca\") pod \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnt66\" (UniqueName: \"kubernetes.io/projected/f0f3ccce-259a-43f4-883e-a8f278c34053-kube-api-access-xnt66\") pod \"f0f3ccce-259a-43f4-883e-a8f278c34053\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589370 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-utilities\") pod \"f0f3ccce-259a-43f4-883e-a8f278c34053\" (UID: \"f0f3ccce-259a-43f4-883e-a8f278c34053\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-operator-metrics\") pod \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\" (UID: \"79d8eb21-a98b-45c5-9406-8e5d64e59fa0\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589447 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-catalog-content\") pod \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589478 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqjwf\" (UniqueName: \"kubernetes.io/projected/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-kube-api-access-dqjwf\") pod \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\" (UID: \"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930\") " Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589759 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.589797 4832 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.590241 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-utilities" (OuterVolumeSpecName: "utilities") pod "8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" (UID: "8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.590275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "79d8eb21-a98b-45c5-9406-8e5d64e59fa0" (UID: "79d8eb21-a98b-45c5-9406-8e5d64e59fa0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.590309 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-utilities" (OuterVolumeSpecName: "utilities") pod "f0f3ccce-259a-43f4-883e-a8f278c34053" (UID: "f0f3ccce-259a-43f4-883e-a8f278c34053"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.590423 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-utilities" (OuterVolumeSpecName: "utilities") pod "11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" (UID: "11f4fe16-d42c-4aaf-9b33-4ab8f93e2930"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.592623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-kube-api-access-vnw28" (OuterVolumeSpecName: "kube-api-access-vnw28") pod "8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" (UID: "8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e"). 
InnerVolumeSpecName "kube-api-access-vnw28". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.592793 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-kube-api-access-dqjwf" (OuterVolumeSpecName: "kube-api-access-dqjwf") pod "11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" (UID: "11f4fe16-d42c-4aaf-9b33-4ab8f93e2930"). InnerVolumeSpecName "kube-api-access-dqjwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.593221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-kube-api-access-pn5m7" (OuterVolumeSpecName: "kube-api-access-pn5m7") pod "79d8eb21-a98b-45c5-9406-8e5d64e59fa0" (UID: "79d8eb21-a98b-45c5-9406-8e5d64e59fa0"). InnerVolumeSpecName "kube-api-access-pn5m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.593501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f3ccce-259a-43f4-883e-a8f278c34053-kube-api-access-xnt66" (OuterVolumeSpecName: "kube-api-access-xnt66") pod "f0f3ccce-259a-43f4-883e-a8f278c34053" (UID: "f0f3ccce-259a-43f4-883e-a8f278c34053"). InnerVolumeSpecName "kube-api-access-xnt66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.593656 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "79d8eb21-a98b-45c5-9406-8e5d64e59fa0" (UID: "79d8eb21-a98b-45c5-9406-8e5d64e59fa0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.601987 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.611537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" (UID: "8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.641722 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerID="d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac" exitCode=0 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.641783 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bs4tj" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.641811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bs4tj" event={"ID":"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e","Type":"ContainerDied","Data":"d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.641848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bs4tj" event={"ID":"8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e","Type":"ContainerDied","Data":"1783ccfb311ca9d4caa2a9b99d869056688f4227aceff0c703de5904092ee4fb"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.641866 4832 scope.go:117] "RemoveContainer" containerID="d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.648233 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0f3ccce-259a-43f4-883e-a8f278c34053" (UID: "f0f3ccce-259a-43f4-883e-a8f278c34053"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.650565 4832 generic.go:334] "Generic (PLEG): container finished" podID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerID="662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca" exitCode=0 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.650678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mt7dw" event={"ID":"3eb4072b-8c81-4808-b3a6-9be9fc814060","Type":"ContainerDied","Data":"662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.650730 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mt7dw" event={"ID":"3eb4072b-8c81-4808-b3a6-9be9fc814060","Type":"ContainerDied","Data":"791c98be740fc475ba83b61122ccda8a63de3a0f38990091b64a717b3d4c260f"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.650756 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mt7dw" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.654114 4832 generic.go:334] "Generic (PLEG): container finished" podID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerID="a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f" exitCode=0 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.654276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerDied","Data":"a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.654309 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j295k" event={"ID":"11f4fe16-d42c-4aaf-9b33-4ab8f93e2930","Type":"ContainerDied","Data":"1afdd0de8e64a7a080681049c21ee7f5271c3aa947d717f14ddff42de60762f7"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.654375 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j295k" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.660034 4832 generic.go:334] "Generic (PLEG): container finished" podID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerID="7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458" exitCode=0 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.660083 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.660112 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" event={"ID":"79d8eb21-a98b-45c5-9406-8e5d64e59fa0","Type":"ContainerDied","Data":"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.660138 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pqqsl" event={"ID":"79d8eb21-a98b-45c5-9406-8e5d64e59fa0","Type":"ContainerDied","Data":"d207888637ba149ce10589bcee220ff9ce0ef6d57d3b8be7b01880c2a45f8529"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.662677 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.662722 4832 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3" exitCode=137 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.662819 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.668312 4832 generic.go:334] "Generic (PLEG): container finished" podID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerID="99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea" exitCode=0 Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.668348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzdgh" event={"ID":"f0f3ccce-259a-43f4-883e-a8f278c34053","Type":"ContainerDied","Data":"99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.668375 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzdgh" event={"ID":"f0f3ccce-259a-43f4-883e-a8f278c34053","Type":"ContainerDied","Data":"77964a5a79e5cca74f6db526e54045737a0396fd26e5a1ab78a2091e1c3d0654"} Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.668444 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzdgh" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.673799 4832 scope.go:117] "RemoveContainer" containerID="999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.681096 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bs4tj"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.692087 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bs4tj"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695136 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnt66\" (UniqueName: \"kubernetes.io/projected/f0f3ccce-259a-43f4-883e-a8f278c34053-kube-api-access-xnt66\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695165 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695178 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695190 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqjwf\" (UniqueName: \"kubernetes.io/projected/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-kube-api-access-dqjwf\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695203 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695215 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695229 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnw28\" (UniqueName: \"kubernetes.io/projected/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-kube-api-access-vnw28\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695243 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5m7\" (UniqueName: \"kubernetes.io/projected/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-kube-api-access-pn5m7\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695254 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f3ccce-259a-43f4-883e-a8f278c34053-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695265 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.695276 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/79d8eb21-a98b-45c5-9406-8e5d64e59fa0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.723667 4832 scope.go:117] "RemoveContainer" containerID="1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.728258 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" path="/var/lib/kubelet/pods/8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e/volumes" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.729873 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.733866 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mt7dw"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.733899 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mt7dw"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.754882 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qzdgh"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.762262 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qzdgh"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.763577 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" (UID: "11f4fe16-d42c-4aaf-9b33-4ab8f93e2930"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.763861 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pqqsl"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.764826 4832 scope.go:117] "RemoveContainer" containerID="d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.765222 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac\": container with ID starting with d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac not found: ID does not exist" containerID="d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.765253 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac"} err="failed to get container status \"d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac\": rpc error: code = NotFound desc = could not find container \"d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac\": container with ID starting with d614b86f1ed2e3e90ea32ef59db4c1d270e189b0c4b405b5844ffe53d20204ac not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.765276 4832 scope.go:117] "RemoveContainer" containerID="999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.765476 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6\": container with ID starting with 999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6 not found: ID does not exist" containerID="999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.765564 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6"} err="failed to get container status \"999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6\": rpc error: code = NotFound desc = could not find container \"999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6\": container with ID starting with 999a7b14c95318cc644b8a3b50f50c12b14a43956aefa407674dd1505bb07ba6 not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.765581 4832 scope.go:117] "RemoveContainer" containerID="1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.765790 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff\": container with ID starting with 1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff not found: ID does not exist" containerID="1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.765809 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff"} err="failed to get container status \"1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff\": rpc error: code = NotFound desc = could not find container \"1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff\": container with ID starting with 1fb75ed2dedc13f1ede04caca3d2475e9a0802ab60d6abba82f1b7778efc29ff not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.765822 4832 scope.go:117] "RemoveContainer" containerID="662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.767221 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pqqsl"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.778710 4832 scope.go:117] "RemoveContainer" containerID="ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.792563 4832 scope.go:117] "RemoveContainer" containerID="a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.796365 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.814256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q8xv8"] Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.814605 4832 scope.go:117] "RemoveContainer" containerID="662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.815958 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca\": container with ID starting with 662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca not found: ID does not exist" containerID="662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.815989 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca"} err="failed to get container status \"662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca\": rpc error: code = NotFound desc = could not find container \"662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca\": container with ID starting with 662449c47065f5c8f4fdb8d844c47cd6b8c008154ebb19f7892f3d64fe12b1ca not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.816018 4832 scope.go:117] "RemoveContainer" containerID="ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.816414 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2\": container with ID starting with ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2 not found: ID does not exist" containerID="ced1e491e6f3d3014b4e4d1f30a4f7f20ee24ae1f11df3e4e909306c4b4dd9e2" Dec 04 06:13:42 crc 
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.816483 4832 scope.go:117] "RemoveContainer" containerID="a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912"
Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.816834 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912\": container with ID starting with a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912 not found: ID does not exist" containerID="a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.816858 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912"} err="failed to get container status \"a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912\": rpc error: code = NotFound desc = could not find container \"a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912\": container with ID starting with a985c698ec807942cd63f9ad68cb4c8aad093f11b630b623ac8b6ecc08486912 not found: ID does not exist"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.816872 4832 scope.go:117] "RemoveContainer" containerID="a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.829779 4832 scope.go:117] "RemoveContainer" containerID="ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.856914 4832 scope.go:117] "RemoveContainer" containerID="c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.906729 4832 scope.go:117] "RemoveContainer" containerID="a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f"
Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.907280 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f\": container with ID starting with a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f not found: ID does not exist" containerID="a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.907313 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f"} err="failed to get container status \"a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f\": rpc error: code = NotFound desc = could not find container \"a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f\": container with ID starting with a370953a8d9545a7c0d0d6a48b1ddb2bef16b3d20a263961a480773871a0087f not found: ID does not exist"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.907337 4832 scope.go:117] "RemoveContainer" containerID="ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be"
Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.907699 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be\": container with ID starting with ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be not found: ID does not exist" containerID="ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.907758 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be"} err="failed to get container status \"ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be\": rpc error: code = NotFound desc = could not find container \"ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be\": container with ID starting with ceaf711f79b321c7c6e76716fb8a26de338ce6561720e7cf5cf55d3b04dbe0be not found: ID does not exist"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.907791 4832 scope.go:117] "RemoveContainer" containerID="c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e"
Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.908080 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e\": container with ID starting with c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e not found: ID does not exist" containerID="c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.908114 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e"} err="failed to get container status \"c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e\": rpc error: code = NotFound desc = could not find container \"c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e\": container with ID starting with c30bcba510b0b4dcac4e8073d3bf69d15169021fa231008f5530a1e190c4647e not found: ID does not exist"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.908135 4832 scope.go:117] "RemoveContainer" containerID="7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.922369 4832 scope.go:117] "RemoveContainer" containerID="7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"
Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.922924 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458\": container with ID starting with 7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458 not found: ID does not exist" containerID="7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"
Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.922957 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"} err="failed to get container status \"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458\": rpc error: code = NotFound desc = could not find container \"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458\": container with ID starting with 7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458"} err="failed to get container status \"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458\": rpc error: code = NotFound desc = could not find container \"7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458\": container with ID starting with 7e86abdd8fc12d41668e53786287e31c1f8e9f58e50a83f17edb02ae30155458 not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.922983 4832 scope.go:117] "RemoveContainer" containerID="3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.936111 4832 scope.go:117] "RemoveContainer" containerID="3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.936596 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3\": container with ID starting with 3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3 not found: ID does not exist" containerID="3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.936638 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3"} err="failed to get container status \"3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3\": rpc error: code = NotFound desc = could not find container \"3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3\": container with ID starting with 3e28dafd6041538a60b1bed7a4fa3c6d76536ea6667365d9a70d88ec5e374ad3 not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.936670 4832 scope.go:117] "RemoveContainer" containerID="99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.949268 4832 scope.go:117] "RemoveContainer" containerID="9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.965602 4832 scope.go:117] "RemoveContainer" containerID="ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.984448 4832 scope.go:117] "RemoveContainer" containerID="99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.993239 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea\": container with ID starting with 99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea not found: ID does not exist" containerID="99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.993301 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea"} err="failed to get container status \"99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea\": rpc error: code = NotFound desc = could not find container \"99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea\": container with ID starting 
with 99e6c1a4cfe657a54a1630ab345ecf080f32c3a89ddd127267e43a175df903ea not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.993368 4832 scope.go:117] "RemoveContainer" containerID="9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.996152 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0\": container with ID starting with 9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0 not found: ID does not exist" containerID="9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.996247 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0"} err="failed to get container status \"9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0\": rpc error: code = NotFound desc = could not find container \"9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0\": container with ID starting with 9dc81fafe9f98df71e553cea5dbe8edec8ac54275ba7665b3fd9e9037639ffb0 not found: ID does not exist" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.996272 4832 scope.go:117] "RemoveContainer" containerID="ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4" Dec 04 06:13:42 crc kubenswrapper[4832]: E1204 06:13:42.996891 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4\": container with ID starting with ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4 not found: ID does not exist" containerID="ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4" Dec 04 06:13:42 crc kubenswrapper[4832]: I1204 06:13:42.996971 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4"} err="failed to get container status \"ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4\": rpc error: code = NotFound desc = could not find container \"ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4\": container with ID starting with ee516929c1df112f8fa669da539baa8962af6c16ff580f44183471dc5a7db2d4 not found: ID does not exist" Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.002929 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j295k"] Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.005674 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j295k"] Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.688487 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" event={"ID":"d5e811d7-d4fd-4504-b6d0-8d653628465d","Type":"ContainerStarted","Data":"924cad53e14b5277634447c64327eced2a53ed175d55b2cafca3fdea8efa891b"} Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.689863 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.689955 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" event={"ID":"d5e811d7-d4fd-4504-b6d0-8d653628465d","Type":"ContainerStarted","Data":"90e6267e837c496d1547b3e9bb6330e9d6593096dc513f94dceae8262219c3be"} Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.695650 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" Dec 04 06:13:43 crc kubenswrapper[4832]: I1204 06:13:43.710381 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q8xv8" podStartSLOduration=2.710360358 podStartE2EDuration="2.710360358s" podCreationTimestamp="2025-12-04 06:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:13:43.706919847 +0000 UTC m=+279.319737553" watchObservedRunningTime="2025-12-04 06:13:43.710360358 +0000 UTC m=+279.323178064" Dec 04 06:13:44 crc kubenswrapper[4832]: I1204 06:13:44.719606 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" path="/var/lib/kubelet/pods/11f4fe16-d42c-4aaf-9b33-4ab8f93e2930/volumes" Dec 04 06:13:44 crc kubenswrapper[4832]: I1204 06:13:44.720411 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" path="/var/lib/kubelet/pods/3eb4072b-8c81-4808-b3a6-9be9fc814060/volumes" Dec 04 06:13:44 crc kubenswrapper[4832]: I1204 06:13:44.721150 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" path="/var/lib/kubelet/pods/79d8eb21-a98b-45c5-9406-8e5d64e59fa0/volumes" Dec 04 06:13:44 crc kubenswrapper[4832]: I1204 06:13:44.722219 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" path="/var/lib/kubelet/pods/f0f3ccce-259a-43f4-883e-a8f278c34053/volumes" Dec 04 06:13:58 crc kubenswrapper[4832]: I1204 06:13:58.403834 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.514298 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cpzbl"] Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.515103 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" podUID="3e212703-f85d-4128-bbff-a3057263d6d3" containerName="controller-manager" containerID="cri-o://e656a0fcac1abced1e4dbf2854fe2aaaac77aa942fa07ae743b6ea15adb484eb" gracePeriod=30 Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.640820 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj"] Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.641462 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" podUID="d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" containerName="route-controller-manager" containerID="cri-o://7df3ce7feaee0ea78864bb905b85180b824ac11f9a7d96ab52c736b96cf5044a" gracePeriod=30 Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.876697 4832 generic.go:334] "Generic (PLEG): container finished" podID="3e212703-f85d-4128-bbff-a3057263d6d3" 
containerID="e656a0fcac1abced1e4dbf2854fe2aaaac77aa942fa07ae743b6ea15adb484eb" exitCode=0 Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.876805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" event={"ID":"3e212703-f85d-4128-bbff-a3057263d6d3","Type":"ContainerDied","Data":"e656a0fcac1abced1e4dbf2854fe2aaaac77aa942fa07ae743b6ea15adb484eb"} Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.876877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" event={"ID":"3e212703-f85d-4128-bbff-a3057263d6d3","Type":"ContainerDied","Data":"42f7799d8eb2b74ee28a7548dd6bf232eb0e19aa3e43b9d356765cebef5e258e"} Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.876902 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42f7799d8eb2b74ee28a7548dd6bf232eb0e19aa3e43b9d356765cebef5e258e" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.879577 4832 generic.go:334] "Generic (PLEG): container finished" podID="d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" containerID="7df3ce7feaee0ea78864bb905b85180b824ac11f9a7d96ab52c736b96cf5044a" exitCode=0 Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.879607 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" event={"ID":"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905","Type":"ContainerDied","Data":"7df3ce7feaee0ea78864bb905b85180b824ac11f9a7d96ab52c736b96cf5044a"} Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.882134 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.953449 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-proxy-ca-bundles\") pod \"3e212703-f85d-4128-bbff-a3057263d6d3\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.953511 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-config\") pod \"3e212703-f85d-4128-bbff-a3057263d6d3\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.953531 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") pod \"3e212703-f85d-4128-bbff-a3057263d6d3\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.953553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7kwt\" (UniqueName: \"kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt\") pod \"3e212703-f85d-4128-bbff-a3057263d6d3\" (UID: \"3e212703-f85d-4128-bbff-a3057263d6d3\") " Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.953583 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") pod \"3e212703-f85d-4128-bbff-a3057263d6d3\" (UID: 
\"3e212703-f85d-4128-bbff-a3057263d6d3\") " Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.954521 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e212703-f85d-4128-bbff-a3057263d6d3" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.955468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3e212703-f85d-4128-bbff-a3057263d6d3" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.955789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-config" (OuterVolumeSpecName: "config") pod "3e212703-f85d-4128-bbff-a3057263d6d3" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.959964 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt" (OuterVolumeSpecName: "kube-api-access-r7kwt") pod "3e212703-f85d-4128-bbff-a3057263d6d3" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3"). InnerVolumeSpecName "kube-api-access-r7kwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.959968 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e212703-f85d-4128-bbff-a3057263d6d3" (UID: "3e212703-f85d-4128-bbff-a3057263d6d3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:14:17 crc kubenswrapper[4832]: I1204 06:14:17.965810 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.054833 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.055259 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.055276 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e212703-f85d-4128-bbff-a3057263d6d3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.055337 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e212703-f85d-4128-bbff-a3057263d6d3-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.055349 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7kwt\" (UniqueName: \"kubernetes.io/projected/3e212703-f85d-4128-bbff-a3057263d6d3-kube-api-access-r7kwt\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.156507 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-config\") pod \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.156593 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-client-ca\") pod \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.156654 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-serving-cert\") pod \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.156686 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-294pt\" (UniqueName: \"kubernetes.io/projected/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-kube-api-access-294pt\") pod \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\" (UID: \"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905\") " Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.157750 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-client-ca" (OuterVolumeSpecName: "client-ca") pod "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" (UID: "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.157877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-config" (OuterVolumeSpecName: "config") pod "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" (UID: "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.160856 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" (UID: "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.160871 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-kube-api-access-294pt" (OuterVolumeSpecName: "kube-api-access-294pt") pod "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" (UID: "d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905"). InnerVolumeSpecName "kube-api-access-294pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.258038 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.258095 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.258111 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.258126 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-294pt\" (UniqueName: \"kubernetes.io/projected/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905-kube-api-access-294pt\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.886046 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cpzbl" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.886631 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.886893 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj" event={"ID":"d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905","Type":"ContainerDied","Data":"2f7fed5bb81ad6e0660df2f10b9b22a8a6f542b07f24ae6222afee0d5e6aed91"} Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.886952 4832 scope.go:117] "RemoveContainer" containerID="7df3ce7feaee0ea78864bb905b85180b824ac11f9a7d96ab52c736b96cf5044a" Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.921919 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj"] Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.924943 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lhfgj"] Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.934122 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cpzbl"] Dec 04 06:14:18 crc kubenswrapper[4832]: I1204 06:14:18.935840 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cpzbl"] Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706260 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"] Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706591 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" containerName="route-controller-manager" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706616 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" containerName="route-controller-manager" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706674 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706686 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706700 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e212703-f85d-4128-bbff-a3057263d6d3" containerName="controller-manager" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706708 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e212703-f85d-4128-bbff-a3057263d6d3" containerName="controller-manager" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706717 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706724 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706735 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerName="marketplace-operator" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706743 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerName="marketplace-operator" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706752 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706760 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706772 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706779 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706790 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706802 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706810 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706820 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706830 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706838 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706848 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706854 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="extract-utilities" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706862 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706870 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706879 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706885 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706924 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706932 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: E1204 06:14:19.706942 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.706949 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="extract-content" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707146 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e212703-f85d-4128-bbff-a3057263d6d3" containerName="controller-manager" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707164 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d8eb21-a98b-45c5-9406-8e5d64e59fa0" containerName="marketplace-operator" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707175 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b864fd5-dc62-4f52-b7e7-1bdeeca4e88e" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707185 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f4fe16-d42c-4aaf-9b33-4ab8f93e2930" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707197 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" containerName="route-controller-manager" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707207 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f3ccce-259a-43f4-883e-a8f278c34053" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707217 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb4072b-8c81-4808-b3a6-9be9fc814060" containerName="registry-server" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.707819 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.709270 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7"] Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.710168 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.711904 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.714321 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.714462 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.714642 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.714823 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.715615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.716132 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.716386 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.716853 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.717862 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.718027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.718744 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.721931 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7"] Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.722210 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.725172 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"] Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.876520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-client-ca\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.876732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-config\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.876947 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2qf\" (UniqueName: \"kubernetes.io/projected/366296f9-5827-474e-9ca3-feaafe67ab4f-kube-api-access-dd2qf\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.877049 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-proxy-ca-bundles\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.877127 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366296f9-5827-474e-9ca3-feaafe67ab4f-serving-cert\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.877247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48be07b2-82ab-4a09-845c-1bd2c4556919-serving-cert\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.877310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-config\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.877367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjlmg\" (UniqueName: \"kubernetes.io/projected/48be07b2-82ab-4a09-845c-1bd2c4556919-kube-api-access-cjlmg\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.877578 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-client-ca\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dd2qf\" (UniqueName: \"kubernetes.io/projected/366296f9-5827-474e-9ca3-feaafe67ab4f-kube-api-access-dd2qf\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-proxy-ca-bundles\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979537 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366296f9-5827-474e-9ca3-feaafe67ab4f-serving-cert\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979563 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48be07b2-82ab-4a09-845c-1bd2c4556919-serving-cert\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-config\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjlmg\" (UniqueName: \"kubernetes.io/projected/48be07b2-82ab-4a09-845c-1bd2c4556919-kube-api-access-cjlmg\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-client-ca\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979660 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-client-ca\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.979681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-config\") pod 
\"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.981044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-proxy-ca-bundles\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.981044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-client-ca\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.981046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-client-ca\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.981147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-config\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.981590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-config\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.985991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366296f9-5827-474e-9ca3-feaafe67ab4f-serving-cert\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.986043 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48be07b2-82ab-4a09-845c-1bd2c4556919-serving-cert\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:19 crc kubenswrapper[4832]: I1204 06:14:19.998866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2qf\" (UniqueName: \"kubernetes.io/projected/366296f9-5827-474e-9ca3-feaafe67ab4f-kube-api-access-dd2qf\") pod \"route-controller-manager-7f9fc89966-2pnk7\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:19 crc 
kubenswrapper[4832]: I1204 06:14:19.999005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjlmg\" (UniqueName: \"kubernetes.io/projected/48be07b2-82ab-4a09-845c-1bd2c4556919-kube-api-access-cjlmg\") pod \"controller-manager-77f667dfdd-l6dvk\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.071813 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.081239 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.265347 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"] Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.299217 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7"] Dec 04 06:14:20 crc kubenswrapper[4832]: W1204 06:14:20.306832 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod366296f9_5827_474e_9ca3_feaafe67ab4f.slice/crio-81b1f0913fc665e9603c88c71c0876142363e59984b18572cd119e7255286d06 WatchSource:0}: Error finding container 81b1f0913fc665e9603c88c71c0876142363e59984b18572cd119e7255286d06: Status 404 returned error can't find the container with id 81b1f0913fc665e9603c88c71c0876142363e59984b18572cd119e7255286d06 Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.717011 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e212703-f85d-4128-bbff-a3057263d6d3" path="/var/lib/kubelet/pods/3e212703-f85d-4128-bbff-a3057263d6d3/volumes" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.717987 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905" path="/var/lib/kubelet/pods/d854fc2c-f4d8-4a3f-a9c9-06cc2ace9905/volumes" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.899740 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" event={"ID":"48be07b2-82ab-4a09-845c-1bd2c4556919","Type":"ContainerStarted","Data":"2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f"} Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.899963 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.899987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" event={"ID":"48be07b2-82ab-4a09-845c-1bd2c4556919","Type":"ContainerStarted","Data":"c5ec623f4e691a98f4502582200c7a3f6f21b0ca94c5b6311bb799de46fdaca3"} Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.901543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" event={"ID":"366296f9-5827-474e-9ca3-feaafe67ab4f","Type":"ContainerStarted","Data":"4cf18f7032d9711041bef56fd348c484c90003cca05eb80d206b6401e80ca09b"} Dec 04 06:14:20 crc kubenswrapper[4832]: 
I1204 06:14:20.901605 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" event={"ID":"366296f9-5827-474e-9ca3-feaafe67ab4f","Type":"ContainerStarted","Data":"81b1f0913fc665e9603c88c71c0876142363e59984b18572cd119e7255286d06"} Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.901775 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.918790 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.968197 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" podStartSLOduration=3.968174554 podStartE2EDuration="3.968174554s" podCreationTimestamp="2025-12-04 06:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:14:20.936283464 +0000 UTC m=+316.549101180" watchObservedRunningTime="2025-12-04 06:14:20.968174554 +0000 UTC m=+316.580992250" Dec 04 06:14:20 crc kubenswrapper[4832]: I1204 06:14:20.998248 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" podStartSLOduration=3.998218805 podStartE2EDuration="3.998218805s" podCreationTimestamp="2025-12-04 06:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:14:20.994852967 +0000 UTC m=+316.607670683" watchObservedRunningTime="2025-12-04 06:14:20.998218805 +0000 UTC m=+316.611036521" Dec 04 06:14:21 crc kubenswrapper[4832]: I1204 06:14:21.170813 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:35 crc kubenswrapper[4832]: I1204 06:14:35.362511 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:14:35 crc kubenswrapper[4832]: I1204 06:14:35.363141 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.075935 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7dfrh"] Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.078207 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.080498 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.091256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dfrh"] Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.224876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d383066c-be25-44c6-854b-0d57c0e91e6b-utilities\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.225154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jqcf\" (UniqueName: \"kubernetes.io/projected/d383066c-be25-44c6-854b-0d57c0e91e6b-kube-api-access-5jqcf\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.225332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d383066c-be25-44c6-854b-0d57c0e91e6b-catalog-content\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.275225 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xqwjd"] Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.276550 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.279167 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.295164 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqwjd"] Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.326821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d383066c-be25-44c6-854b-0d57c0e91e6b-catalog-content\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.326906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d383066c-be25-44c6-854b-0d57c0e91e6b-utilities\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.326963 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d017b88-ca36-417e-9f64-051bd0819f20-utilities\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.326995 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mfp\" (UniqueName: \"kubernetes.io/projected/4d017b88-ca36-417e-9f64-051bd0819f20-kube-api-access-x2mfp\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.327046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d017b88-ca36-417e-9f64-051bd0819f20-catalog-content\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.327282 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jqcf\" (UniqueName: \"kubernetes.io/projected/d383066c-be25-44c6-854b-0d57c0e91e6b-kube-api-access-5jqcf\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.327337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d383066c-be25-44c6-854b-0d57c0e91e6b-catalog-content\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.327379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d383066c-be25-44c6-854b-0d57c0e91e6b-utilities\") pod \"redhat-operators-7dfrh\" (UID: 
\"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.346298 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jqcf\" (UniqueName: \"kubernetes.io/projected/d383066c-be25-44c6-854b-0d57c0e91e6b-kube-api-access-5jqcf\") pod \"redhat-operators-7dfrh\" (UID: \"d383066c-be25-44c6-854b-0d57c0e91e6b\") " pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.400211 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.429614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d017b88-ca36-417e-9f64-051bd0819f20-utilities\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.429692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mfp\" (UniqueName: \"kubernetes.io/projected/4d017b88-ca36-417e-9f64-051bd0819f20-kube-api-access-x2mfp\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.430952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d017b88-ca36-417e-9f64-051bd0819f20-utilities\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.430993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d017b88-ca36-417e-9f64-051bd0819f20-catalog-content\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.431086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d017b88-ca36-417e-9f64-051bd0819f20-catalog-content\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.457129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mfp\" (UniqueName: \"kubernetes.io/projected/4d017b88-ca36-417e-9f64-051bd0819f20-kube-api-access-x2mfp\") pod \"redhat-marketplace-xqwjd\" (UID: \"4d017b88-ca36-417e-9f64-051bd0819f20\") " pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.593934 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:43 crc kubenswrapper[4832]: I1204 06:14:43.820220 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dfrh"] Dec 04 06:14:44 crc kubenswrapper[4832]: I1204 06:14:44.009427 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqwjd"] Dec 04 06:14:44 crc kubenswrapper[4832]: I1204 06:14:44.022213 4832 generic.go:334] "Generic (PLEG): container finished" podID="d383066c-be25-44c6-854b-0d57c0e91e6b" containerID="d88a72e38d3cca761851c3ee13cbc2a0a688870d27366c549abce372b4a06477" exitCode=0 Dec 04 06:14:44 crc kubenswrapper[4832]: I1204 06:14:44.022257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dfrh" event={"ID":"d383066c-be25-44c6-854b-0d57c0e91e6b","Type":"ContainerDied","Data":"d88a72e38d3cca761851c3ee13cbc2a0a688870d27366c549abce372b4a06477"} Dec 04 06:14:44 crc kubenswrapper[4832]: I1204 06:14:44.022286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dfrh" event={"ID":"d383066c-be25-44c6-854b-0d57c0e91e6b","Type":"ContainerStarted","Data":"6a08e20786a150dc35c8786c4c80cf479535b9a772db010e4744e0017f496101"} Dec 04 06:14:44 crc kubenswrapper[4832]: W1204 06:14:44.060103 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d017b88_ca36_417e_9f64_051bd0819f20.slice/crio-9c4e6a3cf71b0892f1f701604403598cb73d5e086a870a81fed37e6730145e73 WatchSource:0}: Error finding container 9c4e6a3cf71b0892f1f701604403598cb73d5e086a870a81fed37e6730145e73: Status 404 returned error can't find the container with id 9c4e6a3cf71b0892f1f701604403598cb73d5e086a870a81fed37e6730145e73 Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.029627 4832 generic.go:334] "Generic (PLEG): container finished" podID="4d017b88-ca36-417e-9f64-051bd0819f20" containerID="f964a0d13e2e672bcd7940a148eae021a1299789be0fd11a32ada29902a48499" exitCode=0 Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.029731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqwjd" event={"ID":"4d017b88-ca36-417e-9f64-051bd0819f20","Type":"ContainerDied","Data":"f964a0d13e2e672bcd7940a148eae021a1299789be0fd11a32ada29902a48499"} Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.029787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqwjd" event={"ID":"4d017b88-ca36-417e-9f64-051bd0819f20","Type":"ContainerStarted","Data":"9c4e6a3cf71b0892f1f701604403598cb73d5e086a870a81fed37e6730145e73"} Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.031619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dfrh" event={"ID":"d383066c-be25-44c6-854b-0d57c0e91e6b","Type":"ContainerStarted","Data":"c56e2e928bfbe870cbc32e34ee8bf9ce060e1004b6e96c8d9ea92abf4b554b3b"} Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.672866 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dl7h7"] Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.674650 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.677011 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.681355 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dl7h7"] Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.764913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc5qr\" (UniqueName: \"kubernetes.io/projected/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-kube-api-access-xc5qr\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.765020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-utilities\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.765082 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-catalog-content\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.866715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc5qr\" (UniqueName: \"kubernetes.io/projected/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-kube-api-access-xc5qr\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.868140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-utilities\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.868216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-utilities\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.868335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-catalog-content\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.868678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-catalog-content\") pod \"community-operators-dl7h7\" (UID: 
\"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.876352 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dhwr"] Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.877692 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.880061 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.893575 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dhwr"] Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.899258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc5qr\" (UniqueName: \"kubernetes.io/projected/9ed7e241-4b9d-42f9-b2de-ee72694a5ba2-kube-api-access-xc5qr\") pod \"community-operators-dl7h7\" (UID: \"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2\") " pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.969410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkx6\" (UniqueName: \"kubernetes.io/projected/6c6758d5-9eeb-4895-9e0e-d4364556afc0-kube-api-access-2qkx6\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.969457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c6758d5-9eeb-4895-9e0e-d4364556afc0-catalog-content\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.969484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c6758d5-9eeb-4895-9e0e-d4364556afc0-utilities\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:45 crc kubenswrapper[4832]: I1204 06:14:45.990947 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.045078 4832 generic.go:334] "Generic (PLEG): container finished" podID="d383066c-be25-44c6-854b-0d57c0e91e6b" containerID="c56e2e928bfbe870cbc32e34ee8bf9ce060e1004b6e96c8d9ea92abf4b554b3b" exitCode=0 Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.045155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dfrh" event={"ID":"d383066c-be25-44c6-854b-0d57c0e91e6b","Type":"ContainerDied","Data":"c56e2e928bfbe870cbc32e34ee8bf9ce060e1004b6e96c8d9ea92abf4b554b3b"} Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.048178 4832 generic.go:334] "Generic (PLEG): container finished" podID="4d017b88-ca36-417e-9f64-051bd0819f20" containerID="70d5ca957ca106cfbcdc4ea62ffcb927bc8583fbc76a8497b7c175cb4d040ef5" exitCode=0 Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.048213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqwjd" event={"ID":"4d017b88-ca36-417e-9f64-051bd0819f20","Type":"ContainerDied","Data":"70d5ca957ca106cfbcdc4ea62ffcb927bc8583fbc76a8497b7c175cb4d040ef5"} Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.070503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c6758d5-9eeb-4895-9e0e-d4364556afc0-utilities\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.070620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkx6\" (UniqueName: \"kubernetes.io/projected/6c6758d5-9eeb-4895-9e0e-d4364556afc0-kube-api-access-2qkx6\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.070648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c6758d5-9eeb-4895-9e0e-d4364556afc0-catalog-content\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.071214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c6758d5-9eeb-4895-9e0e-d4364556afc0-utilities\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.071274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c6758d5-9eeb-4895-9e0e-d4364556afc0-catalog-content\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.091158 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkx6\" (UniqueName: \"kubernetes.io/projected/6c6758d5-9eeb-4895-9e0e-d4364556afc0-kube-api-access-2qkx6\") pod \"certified-operators-7dhwr\" (UID: \"6c6758d5-9eeb-4895-9e0e-d4364556afc0\") " 
pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.195342 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.445336 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dl7h7"] Dec 04 06:14:46 crc kubenswrapper[4832]: I1204 06:14:46.600000 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dhwr"] Dec 04 06:14:46 crc kubenswrapper[4832]: W1204 06:14:46.610118 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c6758d5_9eeb_4895_9e0e_d4364556afc0.slice/crio-8b1c775e9469b12bdcc3a459e8bbd9fc07c6dea730edc371d735e232321e94c0 WatchSource:0}: Error finding container 8b1c775e9469b12bdcc3a459e8bbd9fc07c6dea730edc371d735e232321e94c0: Status 404 returned error can't find the container with id 8b1c775e9469b12bdcc3a459e8bbd9fc07c6dea730edc371d735e232321e94c0 Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.056873 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ed7e241-4b9d-42f9-b2de-ee72694a5ba2" containerID="1eb1b29a4a71464d6a612f9b5e5c83b04ade7b97c1ec56b31197e394230883ab" exitCode=0 Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.056995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dl7h7" event={"ID":"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2","Type":"ContainerDied","Data":"1eb1b29a4a71464d6a612f9b5e5c83b04ade7b97c1ec56b31197e394230883ab"} Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.057042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dl7h7" event={"ID":"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2","Type":"ContainerStarted","Data":"0cada6f45d3d9a84f18c52a0b9b6691dddc43695734d3f73c66d823047d7122c"} Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.058988 4832 generic.go:334] "Generic (PLEG): container finished" podID="6c6758d5-9eeb-4895-9e0e-d4364556afc0" containerID="88003f12adcf14c0d2826a9b083326d4d37a35f3f5eb8c86acf3a2211274d219" exitCode=0 Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.059022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dhwr" event={"ID":"6c6758d5-9eeb-4895-9e0e-d4364556afc0","Type":"ContainerDied","Data":"88003f12adcf14c0d2826a9b083326d4d37a35f3f5eb8c86acf3a2211274d219"} Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.059066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dhwr" event={"ID":"6c6758d5-9eeb-4895-9e0e-d4364556afc0","Type":"ContainerStarted","Data":"8b1c775e9469b12bdcc3a459e8bbd9fc07c6dea730edc371d735e232321e94c0"} Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.062721 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqwjd" event={"ID":"4d017b88-ca36-417e-9f64-051bd0819f20","Type":"ContainerStarted","Data":"2e94a8527729fcf4a2d0f88f1a986b14e88c9bdfa8c23de93ff5e90f9eb2721c"} Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.066060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dfrh" 
event={"ID":"d383066c-be25-44c6-854b-0d57c0e91e6b","Type":"ContainerStarted","Data":"4b694fdc62f9650612db824c93acf89b76aa911405c1d8cffd5c512bd6c2bae9"} Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.123468 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xqwjd" podStartSLOduration=2.699596416 podStartE2EDuration="4.123448888s" podCreationTimestamp="2025-12-04 06:14:43 +0000 UTC" firstStartedPulling="2025-12-04 06:14:45.032510607 +0000 UTC m=+340.645328303" lastFinishedPulling="2025-12-04 06:14:46.456363069 +0000 UTC m=+342.069180775" observedRunningTime="2025-12-04 06:14:47.120693336 +0000 UTC m=+342.733511042" watchObservedRunningTime="2025-12-04 06:14:47.123448888 +0000 UTC m=+342.736266594" Dec 04 06:14:47 crc kubenswrapper[4832]: I1204 06:14:47.150268 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7dfrh" podStartSLOduration=1.706301925 podStartE2EDuration="4.150245934s" podCreationTimestamp="2025-12-04 06:14:43 +0000 UTC" firstStartedPulling="2025-12-04 06:14:44.025702329 +0000 UTC m=+339.638520035" lastFinishedPulling="2025-12-04 06:14:46.469646338 +0000 UTC m=+342.082464044" observedRunningTime="2025-12-04 06:14:47.148505958 +0000 UTC m=+342.761323664" watchObservedRunningTime="2025-12-04 06:14:47.150245934 +0000 UTC m=+342.763063640" Dec 04 06:14:48 crc kubenswrapper[4832]: I1204 06:14:48.072457 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dl7h7" event={"ID":"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2","Type":"ContainerStarted","Data":"74a02683e5c924f3ea3cc8f1efc41f242698af7a0c94de38aed641080c868b6b"} Dec 04 06:14:48 crc kubenswrapper[4832]: I1204 06:14:48.073879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dhwr" event={"ID":"6c6758d5-9eeb-4895-9e0e-d4364556afc0","Type":"ContainerStarted","Data":"c364377156e8374bf3fe12b48c963cfe789f5f88c24e91a15ee52347e86df94a"} Dec 04 06:14:49 crc kubenswrapper[4832]: I1204 06:14:49.088146 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ed7e241-4b9d-42f9-b2de-ee72694a5ba2" containerID="74a02683e5c924f3ea3cc8f1efc41f242698af7a0c94de38aed641080c868b6b" exitCode=0 Dec 04 06:14:49 crc kubenswrapper[4832]: I1204 06:14:49.088267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dl7h7" event={"ID":"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2","Type":"ContainerDied","Data":"74a02683e5c924f3ea3cc8f1efc41f242698af7a0c94de38aed641080c868b6b"} Dec 04 06:14:49 crc kubenswrapper[4832]: I1204 06:14:49.090334 4832 generic.go:334] "Generic (PLEG): container finished" podID="6c6758d5-9eeb-4895-9e0e-d4364556afc0" containerID="c364377156e8374bf3fe12b48c963cfe789f5f88c24e91a15ee52347e86df94a" exitCode=0 Dec 04 06:14:49 crc kubenswrapper[4832]: I1204 06:14:49.090366 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dhwr" event={"ID":"6c6758d5-9eeb-4895-9e0e-d4364556afc0","Type":"ContainerDied","Data":"c364377156e8374bf3fe12b48c963cfe789f5f88c24e91a15ee52347e86df94a"} Dec 04 06:14:50 crc kubenswrapper[4832]: I1204 06:14:50.100041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dl7h7" event={"ID":"9ed7e241-4b9d-42f9-b2de-ee72694a5ba2","Type":"ContainerStarted","Data":"b9a6872664f6cf6457dbc22cfa55ae40cdb5476e31cca4c954ef0d1173bb0fe3"} Dec 04 06:14:50 
crc kubenswrapper[4832]: I1204 06:14:50.102755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dhwr" event={"ID":"6c6758d5-9eeb-4895-9e0e-d4364556afc0","Type":"ContainerStarted","Data":"5b4414710caf7ca7795fa8eace4633fc1e66595eb8c17a789919c2d42d58d61f"} Dec 04 06:14:50 crc kubenswrapper[4832]: I1204 06:14:50.123528 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dl7h7" podStartSLOduration=2.6668916019999998 podStartE2EDuration="5.123507635s" podCreationTimestamp="2025-12-04 06:14:45 +0000 UTC" firstStartedPulling="2025-12-04 06:14:47.058733844 +0000 UTC m=+342.671551550" lastFinishedPulling="2025-12-04 06:14:49.515349877 +0000 UTC m=+345.128167583" observedRunningTime="2025-12-04 06:14:50.121182083 +0000 UTC m=+345.733999799" watchObservedRunningTime="2025-12-04 06:14:50.123507635 +0000 UTC m=+345.736325351" Dec 04 06:14:50 crc kubenswrapper[4832]: I1204 06:14:50.156232 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dhwr" podStartSLOduration=2.680449129 podStartE2EDuration="5.156208776s" podCreationTimestamp="2025-12-04 06:14:45 +0000 UTC" firstStartedPulling="2025-12-04 06:14:47.060190292 +0000 UTC m=+342.673007998" lastFinishedPulling="2025-12-04 06:14:49.535949939 +0000 UTC m=+345.148767645" observedRunningTime="2025-12-04 06:14:50.150074805 +0000 UTC m=+345.762892511" watchObservedRunningTime="2025-12-04 06:14:50.156208776 +0000 UTC m=+345.769026472" Dec 04 06:14:53 crc kubenswrapper[4832]: I1204 06:14:53.400382 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:53 crc kubenswrapper[4832]: I1204 06:14:53.400794 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:53 crc kubenswrapper[4832]: I1204 06:14:53.445299 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:53 crc kubenswrapper[4832]: I1204 06:14:53.594541 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:53 crc kubenswrapper[4832]: I1204 06:14:53.594714 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:53 crc kubenswrapper[4832]: I1204 06:14:53.632567 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:54 crc kubenswrapper[4832]: I1204 06:14:54.171070 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xqwjd" Dec 04 06:14:54 crc kubenswrapper[4832]: I1204 06:14:54.175752 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7dfrh" Dec 04 06:14:55 crc kubenswrapper[4832]: I1204 06:14:55.991355 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:55 crc kubenswrapper[4832]: I1204 06:14:55.991747 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:56 crc kubenswrapper[4832]: I1204 06:14:56.031568 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:56 crc kubenswrapper[4832]: I1204 06:14:56.177946 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dl7h7" Dec 04 06:14:56 crc kubenswrapper[4832]: I1204 06:14:56.196519 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:56 crc kubenswrapper[4832]: I1204 06:14:56.196598 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:56 crc kubenswrapper[4832]: I1204 06:14:56.241628 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:57 crc kubenswrapper[4832]: I1204 06:14:57.177365 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dhwr" Dec 04 06:14:57 crc kubenswrapper[4832]: I1204 06:14:57.513059 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7"] Dec 04 06:14:57 crc kubenswrapper[4832]: I1204 06:14:57.513322 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" podUID="366296f9-5827-474e-9ca3-feaafe67ab4f" containerName="route-controller-manager" containerID="cri-o://4cf18f7032d9711041bef56fd348c484c90003cca05eb80d206b6401e80ca09b" gracePeriod=30 Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.156324 4832 generic.go:334] "Generic (PLEG): container finished" podID="366296f9-5827-474e-9ca3-feaafe67ab4f" containerID="4cf18f7032d9711041bef56fd348c484c90003cca05eb80d206b6401e80ca09b" exitCode=0 Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.156460 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" event={"ID":"366296f9-5827-474e-9ca3-feaafe67ab4f","Type":"ContainerDied","Data":"4cf18f7032d9711041bef56fd348c484c90003cca05eb80d206b6401e80ca09b"} Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.741232 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.775719 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5"] Dec 04 06:14:59 crc kubenswrapper[4832]: E1204 06:14:59.776300 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366296f9-5827-474e-9ca3-feaafe67ab4f" containerName="route-controller-manager" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.776399 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="366296f9-5827-474e-9ca3-feaafe67ab4f" containerName="route-controller-manager" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.777214 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="366296f9-5827-474e-9ca3-feaafe67ab4f" containerName="route-controller-manager" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.777877 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.796269 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5"] Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863239 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd2qf\" (UniqueName: \"kubernetes.io/projected/366296f9-5827-474e-9ca3-feaafe67ab4f-kube-api-access-dd2qf\") pod \"366296f9-5827-474e-9ca3-feaafe67ab4f\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863359 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366296f9-5827-474e-9ca3-feaafe67ab4f-serving-cert\") pod \"366296f9-5827-474e-9ca3-feaafe67ab4f\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863443 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-client-ca\") pod \"366296f9-5827-474e-9ca3-feaafe67ab4f\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-config\") pod \"366296f9-5827-474e-9ca3-feaafe67ab4f\" (UID: \"366296f9-5827-474e-9ca3-feaafe67ab4f\") " Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863672 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6fb458a-5153-42af-b771-e7ea12c970ee-client-ca\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863757 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64kw\" (UniqueName: \"kubernetes.io/projected/d6fb458a-5153-42af-b771-e7ea12c970ee-kube-api-access-k64kw\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863778 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fb458a-5153-42af-b771-e7ea12c970ee-serving-cert\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.863837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fb458a-5153-42af-b771-e7ea12c970ee-config\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc 
kubenswrapper[4832]: I1204 06:14:59.864280 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-client-ca" (OuterVolumeSpecName: "client-ca") pod "366296f9-5827-474e-9ca3-feaafe67ab4f" (UID: "366296f9-5827-474e-9ca3-feaafe67ab4f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.864914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-config" (OuterVolumeSpecName: "config") pod "366296f9-5827-474e-9ca3-feaafe67ab4f" (UID: "366296f9-5827-474e-9ca3-feaafe67ab4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.869917 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366296f9-5827-474e-9ca3-feaafe67ab4f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "366296f9-5827-474e-9ca3-feaafe67ab4f" (UID: "366296f9-5827-474e-9ca3-feaafe67ab4f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.870627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366296f9-5827-474e-9ca3-feaafe67ab4f-kube-api-access-dd2qf" (OuterVolumeSpecName: "kube-api-access-dd2qf") pod "366296f9-5827-474e-9ca3-feaafe67ab4f" (UID: "366296f9-5827-474e-9ca3-feaafe67ab4f"). InnerVolumeSpecName "kube-api-access-dd2qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965576 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64kw\" (UniqueName: \"kubernetes.io/projected/d6fb458a-5153-42af-b771-e7ea12c970ee-kube-api-access-k64kw\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965654 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fb458a-5153-42af-b771-e7ea12c970ee-serving-cert\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fb458a-5153-42af-b771-e7ea12c970ee-config\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6fb458a-5153-42af-b771-e7ea12c970ee-client-ca\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965815 4832 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366296f9-5827-474e-9ca3-feaafe67ab4f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965828 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965838 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366296f9-5827-474e-9ca3-feaafe67ab4f-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.965847 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd2qf\" (UniqueName: \"kubernetes.io/projected/366296f9-5827-474e-9ca3-feaafe67ab4f-kube-api-access-dd2qf\") on node \"crc\" DevicePath \"\"" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.969287 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6fb458a-5153-42af-b771-e7ea12c970ee-client-ca\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.970377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fb458a-5153-42af-b771-e7ea12c970ee-serving-cert\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.970778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fb458a-5153-42af-b771-e7ea12c970ee-config\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:14:59 crc kubenswrapper[4832]: I1204 06:14:59.983551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64kw\" (UniqueName: \"kubernetes.io/projected/d6fb458a-5153-42af-b771-e7ea12c970ee-kube-api-access-k64kw\") pod \"route-controller-manager-55d668d8f8-zhrg5\" (UID: \"d6fb458a-5153-42af-b771-e7ea12c970ee\") " pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.097765 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.169317 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" event={"ID":"366296f9-5827-474e-9ca3-feaafe67ab4f","Type":"ContainerDied","Data":"81b1f0913fc665e9603c88c71c0876142363e59984b18572cd119e7255286d06"} Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.169516 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.171592 4832 scope.go:117] "RemoveContainer" containerID="4cf18f7032d9711041bef56fd348c484c90003cca05eb80d206b6401e80ca09b" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.193013 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4"] Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.194148 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.197650 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.199373 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.239351 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4"] Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.257015 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7"] Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.261077 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-2pnk7"] Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.269307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-secret-volume\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.269376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhts\" (UniqueName: \"kubernetes.io/projected/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-kube-api-access-mqhts\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.269436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-config-volume\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.370237 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhts\" (UniqueName: \"kubernetes.io/projected/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-kube-api-access-mqhts\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.370305 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-config-volume\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.370357 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-secret-volume\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.373097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-config-volume\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.374460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-secret-volume\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.391322 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhts\" (UniqueName: \"kubernetes.io/projected/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-kube-api-access-mqhts\") pod \"collect-profiles-29413815-xcrv4\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.543369 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.578435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5"] Dec 04 06:15:00 crc kubenswrapper[4832]: W1204 06:15:00.580804 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6fb458a_5153_42af_b771_e7ea12c970ee.slice/crio-a85c1372b953f1b0dfd7dc7ff304095b13ad4dab96111484903062b7cf85bec8 WatchSource:0}: Error finding container a85c1372b953f1b0dfd7dc7ff304095b13ad4dab96111484903062b7cf85bec8: Status 404 returned error can't find the container with id a85c1372b953f1b0dfd7dc7ff304095b13ad4dab96111484903062b7cf85bec8 Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.717781 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366296f9-5827-474e-9ca3-feaafe67ab4f" path="/var/lib/kubelet/pods/366296f9-5827-474e-9ca3-feaafe67ab4f/volumes" Dec 04 06:15:00 crc kubenswrapper[4832]: I1204 06:15:00.947548 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4"] Dec 04 06:15:00 crc kubenswrapper[4832]: W1204 06:15:00.957152 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd3393f_7bbf_4a54_a45c_5f206912dd1d.slice/crio-eb3d5f7924dae14ee26cb2ecc721c190a70232722b1bcb6fbd30e612321091ca WatchSource:0}: Error finding container eb3d5f7924dae14ee26cb2ecc721c190a70232722b1bcb6fbd30e612321091ca: Status 404 returned error can't find the container with id eb3d5f7924dae14ee26cb2ecc721c190a70232722b1bcb6fbd30e612321091ca Dec 04 06:15:01 crc kubenswrapper[4832]: I1204 06:15:01.177137 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" event={"ID":"0bd3393f-7bbf-4a54-a45c-5f206912dd1d","Type":"ContainerStarted","Data":"eb3d5f7924dae14ee26cb2ecc721c190a70232722b1bcb6fbd30e612321091ca"} Dec 04 06:15:01 crc kubenswrapper[4832]: I1204 06:15:01.178328 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" event={"ID":"d6fb458a-5153-42af-b771-e7ea12c970ee","Type":"ContainerStarted","Data":"c2a1045665f134ff80ae8d3c3499a2da6b48bbe6bd6d533124807ee9afa4e35d"} Dec 04 06:15:01 crc kubenswrapper[4832]: I1204 06:15:01.178354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" event={"ID":"d6fb458a-5153-42af-b771-e7ea12c970ee","Type":"ContainerStarted","Data":"a85c1372b953f1b0dfd7dc7ff304095b13ad4dab96111484903062b7cf85bec8"} Dec 04 06:15:01 crc kubenswrapper[4832]: I1204 06:15:01.179595 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:15:01 crc kubenswrapper[4832]: I1204 06:15:01.197103 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" podStartSLOduration=4.197082644 podStartE2EDuration="4.197082644s" podCreationTimestamp="2025-12-04 06:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-04 06:15:01.196010485 +0000 UTC m=+356.808828191" watchObservedRunningTime="2025-12-04 06:15:01.197082644 +0000 UTC m=+356.809900350" Dec 04 06:15:01 crc kubenswrapper[4832]: I1204 06:15:01.227532 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55d668d8f8-zhrg5" Dec 04 06:15:02 crc kubenswrapper[4832]: I1204 06:15:02.187106 4832 generic.go:334] "Generic (PLEG): container finished" podID="0bd3393f-7bbf-4a54-a45c-5f206912dd1d" containerID="c0e7c65b69ea52b65e48bd81b4a8679fa63d8afc0d84346e84d53c95c9dab11e" exitCode=0 Dec 04 06:15:02 crc kubenswrapper[4832]: I1204 06:15:02.187234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" event={"ID":"0bd3393f-7bbf-4a54-a45c-5f206912dd1d","Type":"ContainerDied","Data":"c0e7c65b69ea52b65e48bd81b4a8679fa63d8afc0d84346e84d53c95c9dab11e"} Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.512086 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.614939 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-secret-volume\") pod \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.615029 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-config-volume\") pod \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.615069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqhts\" (UniqueName: \"kubernetes.io/projected/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-kube-api-access-mqhts\") pod \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\" (UID: \"0bd3393f-7bbf-4a54-a45c-5f206912dd1d\") " Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.615753 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bd3393f-7bbf-4a54-a45c-5f206912dd1d" (UID: "0bd3393f-7bbf-4a54-a45c-5f206912dd1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.616608 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.623036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-kube-api-access-mqhts" (OuterVolumeSpecName: "kube-api-access-mqhts") pod "0bd3393f-7bbf-4a54-a45c-5f206912dd1d" (UID: "0bd3393f-7bbf-4a54-a45c-5f206912dd1d"). InnerVolumeSpecName "kube-api-access-mqhts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.626602 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0bd3393f-7bbf-4a54-a45c-5f206912dd1d" (UID: "0bd3393f-7bbf-4a54-a45c-5f206912dd1d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.718162 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqhts\" (UniqueName: \"kubernetes.io/projected/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-kube-api-access-mqhts\") on node \"crc\" DevicePath \"\"" Dec 04 06:15:03 crc kubenswrapper[4832]: I1204 06:15:03.718216 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0bd3393f-7bbf-4a54-a45c-5f206912dd1d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.158833 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ns5t9"] Dec 04 06:15:04 crc kubenswrapper[4832]: E1204 06:15:04.159125 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd3393f-7bbf-4a54-a45c-5f206912dd1d" containerName="collect-profiles" Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.159141 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd3393f-7bbf-4a54-a45c-5f206912dd1d" containerName="collect-profiles" Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.159249 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd3393f-7bbf-4a54-a45c-5f206912dd1d" containerName="collect-profiles" Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.159767 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9" Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.178293 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ns5t9"] Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.204509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4" event={"ID":"0bd3393f-7bbf-4a54-a45c-5f206912dd1d","Type":"ContainerDied","Data":"eb3d5f7924dae14ee26cb2ecc721c190a70232722b1bcb6fbd30e612321091ca"} Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.204576 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb3d5f7924dae14ee26cb2ecc721c190a70232722b1bcb6fbd30e612321091ca" Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.204571 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.224648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-registry-tls\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.224694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.224736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.224855 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dmhw\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-kube-api-access-5dmhw\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.224990 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-trusted-ca\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.225226 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.225272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-bound-sa-token\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.225323 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-registry-certificates\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.249666 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.326818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-trusted-ca\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.326994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.327026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-bound-sa-token\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.327055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-registry-certificates\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.327113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-registry-tls\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.327138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.327171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dmhw\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-kube-api-access-5dmhw\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.328127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.328795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-registry-certificates\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.329116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-trusted-ca\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.334725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.334753 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-registry-tls\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.346617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-bound-sa-token\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.347426 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dmhw\" (UniqueName: \"kubernetes.io/projected/b8f957c3-64c9-4cf3-9b63-85b9e83a5587-kube-api-access-5dmhw\") pod \"image-registry-66df7c8f76-ns5t9\" (UID: \"b8f957c3-64c9-4cf3-9b63-85b9e83a5587\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.476561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
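The reconciler entries above walk each of the image-registry pod's volumes through the same three stages: VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded. When reading a capture like this it helps to pair the stages up per volume; the following is a minimal sketch of that pairing, assuming the journal text has been saved as kubelet.log (the file name and the simplified regex are assumptions, not part of the log):

    import re
    from collections import defaultdict

    # Matches the quoted volume name in reconciler/operation_generator entries,
    # e.g. ... started for volume \"registry-tls\" ... (the journal escapes quotes).
    VOLUME = re.compile(r'for volume \\?"([\w.^/-]+)\\?"')

    stages = defaultdict(list)          # volume name -> observed stages, in order
    with open("kubelet.log") as f:      # assumed path to the captured journal text
        for line in f:
            m = VOLUME.search(line)
            if not m:
                continue
            if "VerifyControllerAttachedVolume started" in line:
                stages[m.group(1)].append("attached")
            elif "MountVolume started" in line:
                stages[m.group(1)].append("mount-started")
            elif "MountVolume.SetUp succeeded" in line:
                stages[m.group(1)].append("mounted")

    for vol, seen in sorted(stages.items()):
        print(vol, "->", " -> ".join(seen))

A volume that never reaches "mounted" is the usual first place to look when a pod hangs in ContainerCreating; in the capture above all eight volumes complete within roughly 120 ms.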
Dec 04 06:15:04 crc kubenswrapper[4832]: I1204 06:15:04.897482 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ns5t9"]
Dec 04 06:15:04 crc kubenswrapper[4832]: W1204 06:15:04.913106 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f957c3_64c9_4cf3_9b63_85b9e83a5587.slice/crio-d2c04515e44d41265e0535fb7161e61fbb451ceba3436f9ad3181850d95b2254 WatchSource:0}: Error finding container d2c04515e44d41265e0535fb7161e61fbb451ceba3436f9ad3181850d95b2254: Status 404 returned error can't find the container with id d2c04515e44d41265e0535fb7161e61fbb451ceba3436f9ad3181850d95b2254
Dec 04 06:15:05 crc kubenswrapper[4832]: I1204 06:15:05.212222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9" event={"ID":"b8f957c3-64c9-4cf3-9b63-85b9e83a5587","Type":"ContainerStarted","Data":"f4df88b6eeef5198366ae5cf423a88442decae695796327c50c5620254156e5e"}
Dec 04 06:15:05 crc kubenswrapper[4832]: I1204 06:15:05.212286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9" event={"ID":"b8f957c3-64c9-4cf3-9b63-85b9e83a5587","Type":"ContainerStarted","Data":"d2c04515e44d41265e0535fb7161e61fbb451ceba3436f9ad3181850d95b2254"}
Dec 04 06:15:05 crc kubenswrapper[4832]: I1204 06:15:05.212613 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:05 crc kubenswrapper[4832]: I1204 06:15:05.233050 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9" podStartSLOduration=1.233028813 podStartE2EDuration="1.233028813s" podCreationTimestamp="2025-12-04 06:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:15:05.231794321 +0000 UTC m=+360.844612037" watchObservedRunningTime="2025-12-04 06:15:05.233028813 +0000 UTC m=+360.845846519"
Dec 04 06:15:05 crc kubenswrapper[4832]: I1204 06:15:05.363300 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:15:05 crc kubenswrapper[4832]: I1204 06:15:05.363426 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:15:17 crc kubenswrapper[4832]: I1204 06:15:17.520851 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"]
Dec 04 06:15:17 crc kubenswrapper[4832]: I1204 06:15:17.521746 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" podUID="48be07b2-82ab-4a09-845c-1bd2c4556919" containerName="controller-manager" containerID="cri-o://2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f" gracePeriod=30
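The cAdvisor warning above looks to be a transient race: the cgroup watch fires before the new crio sandbox d2c04515… is registered with the runtime, hence the 404, and the very next PLEG events show both the sandbox and the registry container running. A rough sketch that rebuilds such per-pod timelines from the PLEG entries, again assuming the journal text is in kubelet.log:

    import re

    # Pulls pod name plus Type/Data out of a PLEG event entry, e.g.
    # ... event for pod" pod="ns/name" event={"ID":"...","Type":"ContainerStarted","Data":"<hash>"}
    PLEG = re.compile(
        r'event for pod" pod="([^"]+)" event=\{"ID":"[^"]+",'
        r'"Type":"([^"]+)","Data":"([0-9a-f]+)"\}'
    )

    timeline = {}
    with open("kubelet.log") as f:      # assumed path
        for line in f:
            m = PLEG.search(line)
            if m:
                pod, etype, cid = m.groups()
                timeline.setdefault(pod, []).append((etype, cid[:12]))

    for pod, events in timeline.items():
        print(pod)
        for etype, cid in events:
            print("   ", etype, cid)

For the image-registry pod this prints two ContainerStarted events, one for the sandbox and one for the registry container, in the order the kubelet observed them.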
Dec 04 06:15:17 crc kubenswrapper[4832]: I1204 06:15:17.931887 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.118887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-proxy-ca-bundles\") pod \"48be07b2-82ab-4a09-845c-1bd2c4556919\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") "
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.118959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-config\") pod \"48be07b2-82ab-4a09-845c-1bd2c4556919\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") "
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.118985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjlmg\" (UniqueName: \"kubernetes.io/projected/48be07b2-82ab-4a09-845c-1bd2c4556919-kube-api-access-cjlmg\") pod \"48be07b2-82ab-4a09-845c-1bd2c4556919\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") "
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.119012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48be07b2-82ab-4a09-845c-1bd2c4556919-serving-cert\") pod \"48be07b2-82ab-4a09-845c-1bd2c4556919\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") "
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.119048 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-client-ca\") pod \"48be07b2-82ab-4a09-845c-1bd2c4556919\" (UID: \"48be07b2-82ab-4a09-845c-1bd2c4556919\") "
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.119751 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-client-ca" (OuterVolumeSpecName: "client-ca") pod "48be07b2-82ab-4a09-845c-1bd2c4556919" (UID: "48be07b2-82ab-4a09-845c-1bd2c4556919"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.119867 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-config" (OuterVolumeSpecName: "config") pod "48be07b2-82ab-4a09-845c-1bd2c4556919" (UID: "48be07b2-82ab-4a09-845c-1bd2c4556919"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.119946 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "48be07b2-82ab-4a09-845c-1bd2c4556919" (UID: "48be07b2-82ab-4a09-845c-1bd2c4556919"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.124742 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be07b2-82ab-4a09-845c-1bd2c4556919-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48be07b2-82ab-4a09-845c-1bd2c4556919" (UID: "48be07b2-82ab-4a09-845c-1bd2c4556919"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.130784 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48be07b2-82ab-4a09-845c-1bd2c4556919-kube-api-access-cjlmg" (OuterVolumeSpecName: "kube-api-access-cjlmg") pod "48be07b2-82ab-4a09-845c-1bd2c4556919" (UID: "48be07b2-82ab-4a09-845c-1bd2c4556919"). InnerVolumeSpecName "kube-api-access-cjlmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.220648 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-client-ca\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.220750 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.220764 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48be07b2-82ab-4a09-845c-1bd2c4556919-config\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.220775 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjlmg\" (UniqueName: \"kubernetes.io/projected/48be07b2-82ab-4a09-845c-1bd2c4556919-kube-api-access-cjlmg\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.220811 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48be07b2-82ab-4a09-845c-1bd2c4556919-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.295919 4832 generic.go:334] "Generic (PLEG): container finished" podID="48be07b2-82ab-4a09-845c-1bd2c4556919" containerID="2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f" exitCode=0
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.295965 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" event={"ID":"48be07b2-82ab-4a09-845c-1bd2c4556919","Type":"ContainerDied","Data":"2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f"}
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.295985 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.296009 4832 scope.go:117] "RemoveContainer" containerID="2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.295997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" event={"ID":"48be07b2-82ab-4a09-845c-1bd2c4556919","Type":"ContainerDied","Data":"c5ec623f4e691a98f4502582200c7a3f6f21b0ca94c5b6311bb799de46fdaca3"}
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.753258 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-695974c5b4-gxh22"]
Dec 04 06:15:18 crc kubenswrapper[4832]: E1204 06:15:18.754022 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48be07b2-82ab-4a09-845c-1bd2c4556919" containerName="controller-manager"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.754041 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="48be07b2-82ab-4a09-845c-1bd2c4556919" containerName="controller-manager"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.754219 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="48be07b2-82ab-4a09-845c-1bd2c4556919" containerName="controller-manager"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.754956 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.757140 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.757657 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.758610 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.760660 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.760889 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.761038 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.766705 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-695974c5b4-gxh22"]
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.767413 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.932969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b5c29a-e63d-45bd-a8a2-8033f2684fed-serving-cert\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.933108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-client-ca\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.933140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-proxy-ca-bundles\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.933283 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nrl\" (UniqueName: \"kubernetes.io/projected/15b5c29a-e63d-45bd-a8a2-8033f2684fed-kube-api-access-27nrl\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:18 crc kubenswrapper[4832]: I1204 06:15:18.933540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-config\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.047614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-client-ca\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.047710 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-proxy-ca-bundles\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.047744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nrl\" (UniqueName: \"kubernetes.io/projected/15b5c29a-e63d-45bd-a8a2-8033f2684fed-kube-api-access-27nrl\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.047808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-config\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.047850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b5c29a-e63d-45bd-a8a2-8033f2684fed-serving-cert\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.050013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-client-ca\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.050093 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-proxy-ca-bundles\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.050535 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b5c29a-e63d-45bd-a8a2-8033f2684fed-config\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.059273 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b5c29a-e63d-45bd-a8a2-8033f2684fed-serving-cert\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.071310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nrl\" (UniqueName: \"kubernetes.io/projected/15b5c29a-e63d-45bd-a8a2-8033f2684fed-kube-api-access-27nrl\") pod \"controller-manager-695974c5b4-gxh22\" (UID: \"15b5c29a-e63d-45bd-a8a2-8033f2684fed\") " pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.264454 4832 scope.go:117] "RemoveContainer" containerID="2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f"
Dec 04 06:15:19 crc kubenswrapper[4832]: E1204 06:15:19.265223 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f\": container with ID starting with 2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f not found: ID does not exist" containerID="2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.265290 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f"} err="failed to get container status \"2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f\": rpc error: code = NotFound desc = could not find container \"2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f\": container with ID starting with 2566eeb238e7557be37a2619540064e264917f563bfee93905881a53524ea62f not found: ID does not exist"
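The NotFound pair above is the usual tail of container removal: by the time the kubelet re-queries the runtime for 2566eeb2…'s status, CRI-O has already deleted it, so the error is expected rather than a failure. A crude filter to keep such entries out of the way while triaging (the file name and the heuristic are assumptions, not kubelet logic):

    def is_benign_remove_error(line: str) -> bool:
        """NotFound while deleting an already-removed container is the
        expected follow-up to 'RemoveContainer'; anything else may matter."""
        return ("DeleteContainer returned error" in line
                and "code = NotFound" in line)

    with open("kubelet.log") as f:      # assumed path
        for line in f:
            if "returned error" in line and not is_benign_remove_error(line):
                print(line.rstrip())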
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.282853 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:19 crc kubenswrapper[4832]: I1204 06:15:19.477904 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-695974c5b4-gxh22"]
Dec 04 06:15:20 crc kubenswrapper[4832]: I1204 06:15:20.316178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22" event={"ID":"15b5c29a-e63d-45bd-a8a2-8033f2684fed","Type":"ContainerStarted","Data":"8cf0c9a36d6f99946762230a488a11ea3e502c5e5eabbb2a6d8ca75f5d1d4d18"}
Dec 04 06:15:20 crc kubenswrapper[4832]: I1204 06:15:20.316617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22" event={"ID":"15b5c29a-e63d-45bd-a8a2-8033f2684fed","Type":"ContainerStarted","Data":"8df1e945d5f103b330ca8e3580733fbbef32b3b6364974831d82ccebe776d82a"}
Dec 04 06:15:20 crc kubenswrapper[4832]: I1204 06:15:20.316646 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:20 crc kubenswrapper[4832]: I1204 06:15:20.322088 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22"
Dec 04 06:15:20 crc kubenswrapper[4832]: I1204 06:15:20.351846 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-695974c5b4-gxh22" podStartSLOduration=3.351826946 podStartE2EDuration="3.351826946s" podCreationTimestamp="2025-12-04 06:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:15:20.335725273 +0000 UTC m=+375.948542989" watchObservedRunningTime="2025-12-04 06:15:20.351826946 +0000 UTC m=+375.964644652"
Dec 04 06:15:24 crc kubenswrapper[4832]: I1204 06:15:24.495034 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ns5t9"
Dec 04 06:15:24 crc kubenswrapper[4832]: I1204 06:15:24.604026 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9chqb"]
Dec 04 06:15:35 crc kubenswrapper[4832]: I1204 06:15:35.363136 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:15:35 crc kubenswrapper[4832]: I1204 06:15:35.363842 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:15:35 crc kubenswrapper[4832]: I1204 06:15:35.363893 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4"
Dec 04 06:15:35 crc kubenswrapper[4832]: I1204 06:15:35.364518 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e74b17e6dc40ea52d596a660c11c2fce066900038d8d935b5047def34efc0e45"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 06:15:35 crc kubenswrapper[4832]: I1204 06:15:35.364583 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://e74b17e6dc40ea52d596a660c11c2fce066900038d8d935b5047def34efc0e45" gracePeriod=600
Dec 04 06:15:36 crc kubenswrapper[4832]: I1204 06:15:36.402547 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="e74b17e6dc40ea52d596a660c11c2fce066900038d8d935b5047def34efc0e45" exitCode=0
Dec 04 06:15:36 crc kubenswrapper[4832]: I1204 06:15:36.402613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"e74b17e6dc40ea52d596a660c11c2fce066900038d8d935b5047def34efc0e45"}
Dec 04 06:15:36 crc kubenswrapper[4832]: I1204 06:15:36.403182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"a310df5aafadcc8efe4afcfdedff7303ce96555ee4dee978bd0e572554bab684"}
Dec 04 06:15:36 crc kubenswrapper[4832]: I1204 06:15:36.403213 4832 scope.go:117] "RemoveContainer" containerID="67e94ea55b68d6f7ddcae1da15d2199980662b800d563b8ab333e6dd0c5503f2"
Dec 04 06:15:49 crc kubenswrapper[4832]: I1204 06:15:49.237588 4832 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod48be07b2-82ab-4a09-845c-1bd2c4556919"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod48be07b2-82ab-4a09-845c-1bd2c4556919] : Timed out while waiting for systemd to remove kubepods-burstable-pod48be07b2_82ab_4a09_845c_1bd2c4556919.slice"
Dec 04 06:15:49 crc kubenswrapper[4832]: E1204 06:15:49.238174 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod48be07b2-82ab-4a09-845c-1bd2c4556919] : unable to destroy cgroup paths for cgroup [kubepods burstable pod48be07b2-82ab-4a09-845c-1bd2c4556919] : Timed out while waiting for systemd to remove kubepods-burstable-pod48be07b2_82ab_4a09_845c_1bd2c4556919.slice" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk" podUID="48be07b2-82ab-4a09-845c-1bd2c4556919"
Dec 04 06:15:49 crc kubenswrapper[4832]: I1204 06:15:49.481053 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"
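The timeout above means systemd had not yet removed the pod's cgroup slice when the kubelet stopped waiting, so the pod worker skips and retries; the orphaned volumes for 48be07b2… are in fact cleaned up at 06:15:50 below. Whether a slice is really gone can be checked straight from the filesystem; a sketch assuming the cgroup-v2 layout that the watch-event paths in this log use:

    import os

    # Slice name copied from the entry above; mount point and layout are
    # assumptions based on the /kubepods.slice/... paths seen in this log.
    slice_name = "kubepods-burstable-pod48be07b2_82ab_4a09_845c_1bd2c4556919.slice"
    path = os.path.join(
        "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice",
        slice_name,
    )
    print(path, "still exists" if os.path.isdir(path) else "already removed")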
Dec 04 06:15:49 crc kubenswrapper[4832]: I1204 06:15:49.501714 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"]
Dec 04 06:15:49 crc kubenswrapper[4832]: I1204 06:15:49.506109 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-l6dvk"]
Dec 04 06:15:49 crc kubenswrapper[4832]: I1204 06:15:49.653918 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" podUID="d9f05718-aaf5-41f3-94b2-026b8eb39474" containerName="registry" containerID="cri-o://0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71" gracePeriod=30
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.084805 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.214719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.214892 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-certificates\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.214930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9f05718-aaf5-41f3-94b2-026b8eb39474-installation-pull-secrets\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.215011 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9f05718-aaf5-41f3-94b2-026b8eb39474-ca-trust-extracted\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.215068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-bound-sa-token\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.215099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9r4g\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-kube-api-access-h9r4g\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.215162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-trusted-ca\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.215194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-tls\") pod \"d9f05718-aaf5-41f3-94b2-026b8eb39474\" (UID: \"d9f05718-aaf5-41f3-94b2-026b8eb39474\") "
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.217163 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.217277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.225549 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.225589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f05718-aaf5-41f3-94b2-026b8eb39474-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.226224 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-kube-api-access-h9r4g" (OuterVolumeSpecName: "kube-api-access-h9r4g") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "kube-api-access-h9r4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.227159 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.235403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.236241 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f05718-aaf5-41f3-94b2-026b8eb39474-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d9f05718-aaf5-41f3-94b2-026b8eb39474" (UID: "d9f05718-aaf5-41f3-94b2-026b8eb39474"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317096 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317133 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9r4g\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-kube-api-access-h9r4g\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317144 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317153 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317162 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9f05718-aaf5-41f3-94b2-026b8eb39474-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317173 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9f05718-aaf5-41f3-94b2-026b8eb39474-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.317184 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9f05718-aaf5-41f3-94b2-026b8eb39474-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.487738 4832 generic.go:334] "Generic (PLEG): container finished" podID="d9f05718-aaf5-41f3-94b2-026b8eb39474" containerID="0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71" exitCode=0
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.487784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" event={"ID":"d9f05718-aaf5-41f3-94b2-026b8eb39474","Type":"ContainerDied","Data":"0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71"}
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.487824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb" event={"ID":"d9f05718-aaf5-41f3-94b2-026b8eb39474","Type":"ContainerDied","Data":"1fe678d65024cd4f18f476e40d7cf4127e1f1bff9c5903deecf2e20ddfab855d"}
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.487842 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9chqb"
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.487848 4832 scope.go:117] "RemoveContainer" containerID="0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71"
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.514426 4832 scope.go:117] "RemoveContainer" containerID="0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71"
Dec 04 06:15:50 crc kubenswrapper[4832]: E1204 06:15:50.515509 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71\": container with ID starting with 0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71 not found: ID does not exist" containerID="0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71"
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.515563 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71"} err="failed to get container status \"0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71\": rpc error: code = NotFound desc = could not find container \"0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71\": container with ID starting with 0030eadbea2d69aa635d839a48823f451cd4aa31ab726da596b9787e09fc6a71 not found: ID does not exist"
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.523079 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9chqb"]
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.526509 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9chqb"]
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.720091 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48be07b2-82ab-4a09-845c-1bd2c4556919" path="/var/lib/kubelet/pods/48be07b2-82ab-4a09-845c-1bd2c4556919/volumes"
Dec 04 06:15:50 crc kubenswrapper[4832]: I1204 06:15:50.721632 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f05718-aaf5-41f3-94b2-026b8eb39474" path="/var/lib/kubelet/pods/d9f05718-aaf5-41f3-94b2-026b8eb39474/volumes"
Dec 04 06:17:35 crc kubenswrapper[4832]: I1204 06:17:35.362534 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:17:35 crc kubenswrapper[4832]: I1204 06:17:35.363222 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:18:04 crc kubenswrapper[4832]: I1204 06:18:04.899539 4832 scope.go:117] "RemoveContainer" containerID="e656a0fcac1abced1e4dbf2854fe2aaaac77aa942fa07ae743b6ea15adb484eb"
Dec 04 06:18:05 crc kubenswrapper[4832]: I1204 06:18:05.362539 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:18:05 crc kubenswrapper[4832]: I1204 06:18:05.362620 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:18:35 crc kubenswrapper[4832]: I1204 06:18:35.362175 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:18:35 crc kubenswrapper[4832]: I1204 06:18:35.362897 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:18:35 crc kubenswrapper[4832]: I1204 06:18:35.362966 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4"
Dec 04 06:18:35 crc kubenswrapper[4832]: I1204 06:18:35.363843 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a310df5aafadcc8efe4afcfdedff7303ce96555ee4dee978bd0e572554bab684"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 06:18:35 crc kubenswrapper[4832]: I1204 06:18:35.363919 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://a310df5aafadcc8efe4afcfdedff7303ce96555ee4dee978bd0e572554bab684" gracePeriod=600
Dec 04 06:18:36 crc kubenswrapper[4832]: I1204 06:18:36.487357 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="a310df5aafadcc8efe4afcfdedff7303ce96555ee4dee978bd0e572554bab684" exitCode=0
Dec 04 06:18:36 crc kubenswrapper[4832]: I1204 06:18:36.487421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"a310df5aafadcc8efe4afcfdedff7303ce96555ee4dee978bd0e572554bab684"}
Dec 04 06:18:36 crc kubenswrapper[4832]: I1204 06:18:36.488001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"469128422ffdf7c9a116b1453571faa4112e83e21b46cb276494efc9be588617"}
Dec 04 06:18:36 crc kubenswrapper[4832]: I1204 06:18:36.488120 4832 scope.go:117] "RemoveContainer" containerID="e74b17e6dc40ea52d596a660c11c2fce066900038d8d935b5047def34efc0e45"
Dec 04 06:20:35 crc kubenswrapper[4832]: I1204 06:20:35.362133 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
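The machine-config-daemon is stuck in a cycle here: the liveness GET against http://127.0.0.1:8798/health is refused every thirty seconds, and once enough consecutive probes fail the kubelet kills the container with gracePeriod=600 and starts a replacement (e74b17e6… gives way to a310df5a…, then to 46912842…), after which the failures resume. The check itself is easy to replicate from the node; a sketch of the same request, with the one-second timeout being an assumption rather than the configured probe timeout:

    import urllib.error
    import urllib.request

    try:
        with urllib.request.urlopen("http://127.0.0.1:8798/health", timeout=1) as resp:
            print("probe ok:", resp.status)
    except urllib.error.URLError as e:
        # A refused connection surfaces as e.reason, matching the
        # "connect: connection refused" output recorded by prober.go.
        print("probe failed:", e.reason)

If the request is refused even while the container is freshly restarted, the endpoint itself never comes up, which fits the pattern of repeated restarts seen in this capture.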
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:20:35 crc kubenswrapper[4832]: I1204 06:20:35.363037 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.786011 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8smzg"] Dec 04 06:20:45 crc kubenswrapper[4832]: E1204 06:20:45.786762 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f05718-aaf5-41f3-94b2-026b8eb39474" containerName="registry" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.786774 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f05718-aaf5-41f3-94b2-026b8eb39474" containerName="registry" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.786867 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f05718-aaf5-41f3-94b2-026b8eb39474" containerName="registry" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.787273 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.789986 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.790249 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.790460 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gt696" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.800748 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7cv2p"] Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.803596 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7cv2p" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.805543 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mgrzc" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.815503 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jkfns"] Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.816310 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.817857 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8l6q8" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.819908 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7cv2p"] Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.835585 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jkfns"] Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.846935 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8smzg"] Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.889293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fwd\" (UniqueName: \"kubernetes.io/projected/ce0aa020-53b7-4687-b620-659e270dbcc3-kube-api-access-68fwd\") pod \"cert-manager-cainjector-7f985d654d-8smzg\" (UID: \"ce0aa020-53b7-4687-b620-659e270dbcc3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.990969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fwd\" (UniqueName: \"kubernetes.io/projected/ce0aa020-53b7-4687-b620-659e270dbcc3-kube-api-access-68fwd\") pod \"cert-manager-cainjector-7f985d654d-8smzg\" (UID: \"ce0aa020-53b7-4687-b620-659e270dbcc3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.991065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8wnc\" (UniqueName: \"kubernetes.io/projected/801084d1-2568-40d3-b9a1-3f3d43cecdea-kube-api-access-g8wnc\") pod \"cert-manager-webhook-5655c58dd6-jkfns\" (UID: \"801084d1-2568-40d3-b9a1-3f3d43cecdea\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:45 crc kubenswrapper[4832]: I1204 06:20:45.991180 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kgf\" (UniqueName: \"kubernetes.io/projected/982879a5-56a8-46a1-ac5f-73023f9a1ddc-kube-api-access-d4kgf\") pod \"cert-manager-5b446d88c5-7cv2p\" (UID: \"982879a5-56a8-46a1-ac5f-73023f9a1ddc\") " pod="cert-manager/cert-manager-5b446d88c5-7cv2p" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.019293 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fwd\" (UniqueName: \"kubernetes.io/projected/ce0aa020-53b7-4687-b620-659e270dbcc3-kube-api-access-68fwd\") pod \"cert-manager-cainjector-7f985d654d-8smzg\" (UID: \"ce0aa020-53b7-4687-b620-659e270dbcc3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.092705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kgf\" (UniqueName: \"kubernetes.io/projected/982879a5-56a8-46a1-ac5f-73023f9a1ddc-kube-api-access-d4kgf\") pod \"cert-manager-5b446d88c5-7cv2p\" (UID: \"982879a5-56a8-46a1-ac5f-73023f9a1ddc\") " pod="cert-manager/cert-manager-5b446d88c5-7cv2p" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.092773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8wnc\" (UniqueName: 
\"kubernetes.io/projected/801084d1-2568-40d3-b9a1-3f3d43cecdea-kube-api-access-g8wnc\") pod \"cert-manager-webhook-5655c58dd6-jkfns\" (UID: \"801084d1-2568-40d3-b9a1-3f3d43cecdea\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.107665 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.113567 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kgf\" (UniqueName: \"kubernetes.io/projected/982879a5-56a8-46a1-ac5f-73023f9a1ddc-kube-api-access-d4kgf\") pod \"cert-manager-5b446d88c5-7cv2p\" (UID: \"982879a5-56a8-46a1-ac5f-73023f9a1ddc\") " pod="cert-manager/cert-manager-5b446d88c5-7cv2p" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.116746 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8wnc\" (UniqueName: \"kubernetes.io/projected/801084d1-2568-40d3-b9a1-3f3d43cecdea-kube-api-access-g8wnc\") pod \"cert-manager-webhook-5655c58dd6-jkfns\" (UID: \"801084d1-2568-40d3-b9a1-3f3d43cecdea\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.119130 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7cv2p" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.132823 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.360265 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7cv2p"] Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.369172 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.440436 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jkfns"] Dec 04 06:20:46 crc kubenswrapper[4832]: W1204 06:20:46.445359 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod801084d1_2568_40d3_b9a1_3f3d43cecdea.slice/crio-29e532c5be4f7bde5c4064613fae2752037915618d56b7c59d54da110f093c7a WatchSource:0}: Error finding container 29e532c5be4f7bde5c4064613fae2752037915618d56b7c59d54da110f093c7a: Status 404 returned error can't find the container with id 29e532c5be4f7bde5c4064613fae2752037915618d56b7c59d54da110f093c7a Dec 04 06:20:46 crc kubenswrapper[4832]: I1204 06:20:46.608045 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8smzg"] Dec 04 06:20:46 crc kubenswrapper[4832]: W1204 06:20:46.611805 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0aa020_53b7_4687_b620_659e270dbcc3.slice/crio-b222cd4109104e95b157f24e813ec8bc8df3a3ffcbf60b7a65ae53660485682e WatchSource:0}: Error finding container b222cd4109104e95b157f24e813ec8bc8df3a3ffcbf60b7a65ae53660485682e: Status 404 returned error can't find the container with id b222cd4109104e95b157f24e813ec8bc8df3a3ffcbf60b7a65ae53660485682e Dec 04 06:20:47 crc kubenswrapper[4832]: I1204 06:20:47.236591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-5b446d88c5-7cv2p" event={"ID":"982879a5-56a8-46a1-ac5f-73023f9a1ddc","Type":"ContainerStarted","Data":"a2f65c0289887c5a1b981b52feba59dc406a18afa931c7e8cc2715858fc32522"} Dec 04 06:20:47 crc kubenswrapper[4832]: I1204 06:20:47.238829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" event={"ID":"801084d1-2568-40d3-b9a1-3f3d43cecdea","Type":"ContainerStarted","Data":"29e532c5be4f7bde5c4064613fae2752037915618d56b7c59d54da110f093c7a"} Dec 04 06:20:47 crc kubenswrapper[4832]: I1204 06:20:47.241060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" event={"ID":"ce0aa020-53b7-4687-b620-659e270dbcc3","Type":"ContainerStarted","Data":"b222cd4109104e95b157f24e813ec8bc8df3a3ffcbf60b7a65ae53660485682e"} Dec 04 06:20:49 crc kubenswrapper[4832]: I1204 06:20:49.254202 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" event={"ID":"801084d1-2568-40d3-b9a1-3f3d43cecdea","Type":"ContainerStarted","Data":"40f4a74d53ffe2005e2392cf4658ed5a1f9b46b4b70ddd6c4a3af1c69d44bb8b"} Dec 04 06:20:49 crc kubenswrapper[4832]: I1204 06:20:49.254983 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:49 crc kubenswrapper[4832]: I1204 06:20:49.257304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7cv2p" event={"ID":"982879a5-56a8-46a1-ac5f-73023f9a1ddc","Type":"ContainerStarted","Data":"7d3d16fb84a2e4117067020905ec56fe5307fe9aa6bedcf1545ff70df889a40c"} Dec 04 06:20:49 crc kubenswrapper[4832]: I1204 06:20:49.273974 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" podStartSLOduration=1.944831696 podStartE2EDuration="4.273956744s" podCreationTimestamp="2025-12-04 06:20:45 +0000 UTC" firstStartedPulling="2025-12-04 06:20:46.447641726 +0000 UTC m=+702.060459422" lastFinishedPulling="2025-12-04 06:20:48.776766754 +0000 UTC m=+704.389584470" observedRunningTime="2025-12-04 06:20:49.268128143 +0000 UTC m=+704.880945869" watchObservedRunningTime="2025-12-04 06:20:49.273956744 +0000 UTC m=+704.886774450" Dec 04 06:20:49 crc kubenswrapper[4832]: I1204 06:20:49.282956 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7cv2p" podStartSLOduration=1.919362019 podStartE2EDuration="4.282936032s" podCreationTimestamp="2025-12-04 06:20:45 +0000 UTC" firstStartedPulling="2025-12-04 06:20:46.368960808 +0000 UTC m=+701.981778514" lastFinishedPulling="2025-12-04 06:20:48.732534821 +0000 UTC m=+704.345352527" observedRunningTime="2025-12-04 06:20:49.282337738 +0000 UTC m=+704.895155454" watchObservedRunningTime="2025-12-04 06:20:49.282936032 +0000 UTC m=+704.895753748" Dec 04 06:20:50 crc kubenswrapper[4832]: I1204 06:20:50.349608 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8smzg" event={"ID":"ce0aa020-53b7-4687-b620-659e270dbcc3","Type":"ContainerStarted","Data":"c57c5858c58bea6dbc442eff8c0b2132406fdc2e540750eb0df53e99febf95c0"} Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.136312 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-jkfns" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.162824 4832 
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309046 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zdmhj"]
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309449 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-controller" containerID="cri-o://7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309738 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="nbdb" containerID="cri-o://3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309813 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-node" containerID="cri-o://5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309807 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="sbdb" containerID="cri-o://2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309854 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-acl-logging" containerID="cri-o://071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309788 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="northd" containerID="cri-o://12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.309804 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" gracePeriod=30
Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.339433 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" containerID="cri-o://fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" gracePeriod=30
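Each "Killing container with a grace period" line above corresponds to the classic SIGTERM-then-SIGKILL sequence with gracePeriod=30: the runtime sends SIGTERM first and escalates to SIGKILL only if the container outlives the deadline. A minimal Unix sketch of that generic pattern, not CRI-O's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period, and only
// then resorts to SIGKILL, mirroring the log's gracePeriod semantics.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL after the deadline
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 2*time.Second)) // signal: terminated
}
```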
containerName="ovnkube-controller" containerID="cri-o://fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" gracePeriod=30 Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.572465 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/3.log" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.574691 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovn-acl-logging/0.log" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.575167 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovn-controller/0.log" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.575705 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630564 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9bcbd"] Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630772 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630783 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630792 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-acl-logging" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630799 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-acl-logging" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630805 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="nbdb" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630812 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="nbdb" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-node" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630828 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-node" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630838 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630844 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630850 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-ovn-metrics" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630856 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630866 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630872 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630879 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630885 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630893 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630901 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630909 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="sbdb" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630915 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="sbdb" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630923 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="northd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630928 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="northd" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.630937 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kubecfg-setup" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.630943 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kubecfg-setup" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631035 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631045 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-ovn-metrics" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631051 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="nbdb" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631059 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="northd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631065 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-acl-logging" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631075 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631082 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="kube-rbac-proxy-node" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631089 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovn-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631099 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="sbdb" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631105 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: E1204 06:20:56.631187 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631193 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.631670 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.632093 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerName="ovnkube-controller" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.633817 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640269 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640420 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-etc-openvswitch\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-slash\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640525 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-systemd-units\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640554 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovn-node-metrics-cert\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640615 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-env-overrides\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640632 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-ovn\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-slash" (OuterVolumeSpecName: "host-slash") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-node-log\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640670 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-openvswitch\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640693 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-node-log" (OuterVolumeSpecName: "node-log") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640704 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-kubelet\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640764 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640760 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-netns\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640755 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640820 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-netd\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-config\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-log-socket\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640908 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-var-lib-openvswitch\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640930 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-log-socket" (OuterVolumeSpecName: "log-socket") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-bin\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640960 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640965 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-ovn-kubernetes\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641026 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-script-lib\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641048 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-systemd\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640984 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwds7\" (UniqueName: \"kubernetes.io/projected/c442d280-de5c-4240-90b3-af48bbb2f1c5-kube-api-access-cwds7\") pod \"c442d280-de5c-4240-90b3-af48bbb2f1c5\" (UID: \"c442d280-de5c-4240-90b3-af48bbb2f1c5\") " Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641297 4832 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641315 4832 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641324 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641332 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641339 4832 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-log-socket\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641347 4832 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641355 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641366 4832 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.640995 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641414 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641374 4832 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-slash\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641153 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641298 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641459 4832 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641473 4832 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641483 4832 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-node-log\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.641796 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.647166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.647714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c442d280-de5c-4240-90b3-af48bbb2f1c5-kube-api-access-cwds7" (OuterVolumeSpecName: "kube-api-access-cwds7") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "kube-api-access-cwds7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.658926 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c442d280-de5c-4240-90b3-af48bbb2f1c5" (UID: "c442d280-de5c-4240-90b3-af48bbb2f1c5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742855 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-slash\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-node-log\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-log-socket\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-cni-bin\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742963 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-var-lib-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovnkube-script-lib\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.742999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743063 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-run-ovn-kubernetes\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-kubelet\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743214 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-run-netns\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovnkube-config\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743267 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-etc-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-cni-netd\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-ovn\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743348 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjjg\" (UniqueName: \"kubernetes.io/projected/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-kube-api-access-8vjjg\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743474 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-systemd\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-env-overrides\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-systemd-units\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovn-node-metrics-cert\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743752 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743781 4832 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743794 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwds7\" (UniqueName: \"kubernetes.io/projected/c442d280-de5c-4240-90b3-af48bbb2f1c5-kube-api-access-cwds7\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743809 4832 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743824 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743837 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743852 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c442d280-de5c-4240-90b3-af48bbb2f1c5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.743865 4832 reconciler_common.go:293] "Volume 
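All of these kubelet entries share the klog text format: a severity letter plus MMDD, a timestamp, the PID, file:line, then a quoted message followed by key=value pairs. A rough parser, good enough for the lines in this log but not a complete klog grammar:

```go
package main

import (
	"fmt"
	"regexp"
)

// header captures severity, MMDD, time, pid, and file:line; kv captures
// the trailing key="value" pairs. Both are approximations of klog output.
var (
	header = regexp.MustCompile(`^([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+)\] `)
	msg    = regexp.MustCompile(`^"((?:[^"\\]|\\.)*)"`)
	kv     = regexp.MustCompile(`(\w+)="((?:[^"\\]|\\.)*)"`)
)

func main() {
	line := `I1204 06:20:56.633817 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd"`
	h := header.FindStringSubmatch(line)
	rest := line[len(h[0]):]
	fmt.Println("severity:", h[1], "time:", h[3], "source:", h[5])
	if m := msg.FindStringSubmatch(rest); m != nil {
		fmt.Println("message:", m[1])
		rest = rest[len(m[0]):]
	}
	for _, m := range kv.FindAllStringSubmatch(rest, -1) {
		fmt.Printf("%s = %s\n", m[1], m[2])
	}
}
```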
detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c442d280-de5c-4240-90b3-af48bbb2f1c5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845467 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-kubelet\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-run-netns\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovnkube-config\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845564 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-etc-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-cni-netd\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-etc-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-kubelet\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845703 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-ovn\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-cni-netd\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-run-netns\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-ovn\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjjg\" (UniqueName: \"kubernetes.io/projected/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-kube-api-access-8vjjg\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-systemd\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-env-overrides\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-systemd\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-systemd-units\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-systemd-units\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.845981 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovn-node-metrics-cert\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-slash\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846032 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-node-log\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-log-socket\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846072 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-cni-bin\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846092 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-var-lib-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovnkube-script-lib\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846126 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 
06:20:56.846143 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-run-ovn-kubernetes\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-run-ovn-kubernetes\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846189 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-log-socket\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-cni-bin\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-var-lib-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846299 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovnkube-config\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-host-slash\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846343 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-node-log\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-run-openvswitch\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
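Note that the SetUp timestamps above are not monotonic (.845626 is logged after .845703) because the mounts run on concurrent goroutines. Sorting on the klog timestamp restores event order; a lexicographic compare suffices because the HH:MM:SS.micros field is fixed-width within a day. The line strings here are abbreviated stand-ins for the entries above:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// ts matches the fixed-width klog time-of-day field.
var ts = regexp.MustCompile(`\d{2}:\d{2}:\d{2}\.\d{6}`)

func main() {
	lines := []string{
		`I1204 06:20:56.845644 4832 ... SetUp succeeded ... etc-openvswitch ...`,
		`I1204 06:20:56.845626 4832 ... SetUp succeeded ... host-cni-netd ...`,
		`I1204 06:20:56.845703 4832 ... SetUp succeeded ... run-ovn ...`,
	}
	sort.Slice(lines, func(i, j int) bool {
		return ts.FindString(lines[i]) < ts.FindString(lines[j])
	})
	for _, l := range lines {
		fmt.Println(l) // .845626, .845644, .845703
	}
}
```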
\"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-env-overrides\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.846828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovnkube-script-lib\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.850042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-ovn-node-metrics-cert\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.862121 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjjg\" (UniqueName: \"kubernetes.io/projected/ee4adb5f-37ff-4ad0-95a7-844ca6d20a48-kube-api-access-8vjjg\") pod \"ovnkube-node-9bcbd\" (UID: \"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: I1204 06:20:56.946525 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:20:56 crc kubenswrapper[4832]: W1204 06:20:56.970816 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee4adb5f_37ff_4ad0_95a7_844ca6d20a48.slice/crio-01d3cf2ef1814f9b31e5e895b0131929b247c1c8feb1b5a90a7017a95d683eaa WatchSource:0}: Error finding container 01d3cf2ef1814f9b31e5e895b0131929b247c1c8feb1b5a90a7017a95d683eaa: Status 404 returned error can't find the container with id 01d3cf2ef1814f9b31e5e895b0131929b247c1c8feb1b5a90a7017a95d683eaa Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.390473 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovnkube-controller/3.log" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.393201 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovn-acl-logging/0.log" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.393781 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zdmhj_c442d280-de5c-4240-90b3-af48bbb2f1c5/ovn-controller/0.log" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394218 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394251 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394263 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" 
exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394273 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394283 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394283 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394328 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394342 4832 scope.go:117] "RemoveContainer" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394294 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394472 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" exitCode=143 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394488 4832 generic.go:334] "Generic (PLEG): container finished" podID="c442d280-de5c-4240-90b3-af48bbb2f1c5" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" exitCode=143 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394328 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394684 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" 
event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394709 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394723 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394731 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394739 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394746 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394754 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394761 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394769 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394776 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394798 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394807 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394814 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394823 4832 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394830 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394839 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394846 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394853 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394860 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394866 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394887 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394908 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394918 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394928 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394937 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394947 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394956 4832 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394965 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394974 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394983 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.394995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zdmhj" event={"ID":"c442d280-de5c-4240-90b3-af48bbb2f1c5","Type":"ContainerDied","Data":"bf45605bf835942db20b8bd280dc8c984e3f4a06274b42404c007fd10d531089"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395006 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395015 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395022 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395029 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395036 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395044 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395051 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395058 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395065 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.395072 4832 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.398999 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/2.log" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.403971 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/1.log" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.404031 4832 generic.go:334] "Generic (PLEG): container finished" podID="325cffd3-4d6a-4916-8ad9-743cdc486769" containerID="884e91c9ce60008aa03c7bf5ca552038900a5fc445619ae1247a88ea68ff4873" exitCode=2 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.404106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerDied","Data":"884e91c9ce60008aa03c7bf5ca552038900a5fc445619ae1247a88ea68ff4873"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.404139 4832 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.405243 4832 scope.go:117] "RemoveContainer" containerID="884e91c9ce60008aa03c7bf5ca552038900a5fc445619ae1247a88ea68ff4873" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.405498 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9nl9n_openshift-multus(325cffd3-4d6a-4916-8ad9-743cdc486769)\"" pod="openshift-multus/multus-9nl9n" podUID="325cffd3-4d6a-4916-8ad9-743cdc486769" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.406254 4832 generic.go:334] "Generic (PLEG): container finished" podID="ee4adb5f-37ff-4ad0-95a7-844ca6d20a48" containerID="3aa144d4441f77ef2f1e223443526b97df22f06241fdf4a28d8e14ff3e576cf7" exitCode=0 Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.406281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerDied","Data":"3aa144d4441f77ef2f1e223443526b97df22f06241fdf4a28d8e14ff3e576cf7"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.406297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"01d3cf2ef1814f9b31e5e895b0131929b247c1c8feb1b5a90a7017a95d683eaa"} Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.417826 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.477114 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zdmhj"] Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.480656 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zdmhj"] Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.503973 4832 scope.go:117] "RemoveContainer" 
containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.533774 4832 scope.go:117] "RemoveContainer" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.555142 4832 scope.go:117] "RemoveContainer" containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.567073 4832 scope.go:117] "RemoveContainer" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.577148 4832 scope.go:117] "RemoveContainer" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.595037 4832 scope.go:117] "RemoveContainer" containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.616298 4832 scope.go:117] "RemoveContainer" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.655470 4832 scope.go:117] "RemoveContainer" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.674823 4832 scope.go:117] "RemoveContainer" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.675268 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": container with ID starting with fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7 not found: ID does not exist" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.675298 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} err="failed to get container status \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": rpc error: code = NotFound desc = could not find container \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": container with ID starting with fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.675320 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.675661 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": container with ID starting with d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159 not found: ID does not exist" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.675700 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} err="failed to get container status \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": rpc 
error: code = NotFound desc = could not find container \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": container with ID starting with d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.675737 4832 scope.go:117] "RemoveContainer" containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.676012 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": container with ID starting with 2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4 not found: ID does not exist" containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.676032 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} err="failed to get container status \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": rpc error: code = NotFound desc = could not find container \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": container with ID starting with 2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.676048 4832 scope.go:117] "RemoveContainer" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.676311 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": container with ID starting with 3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae not found: ID does not exist" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.676327 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} err="failed to get container status \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": rpc error: code = NotFound desc = could not find container \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": container with ID starting with 3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.676340 4832 scope.go:117] "RemoveContainer" containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.676627 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": container with ID starting with 12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3 not found: ID does not exist" containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.676665 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} err="failed to get container status \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": rpc error: code = NotFound desc = could not find container \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": container with ID starting with 12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.676693 4832 scope.go:117] "RemoveContainer" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.676991 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": container with ID starting with 0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a not found: ID does not exist" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.677022 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} err="failed to get container status \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": rpc error: code = NotFound desc = could not find container \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": container with ID starting with 0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.677040 4832 scope.go:117] "RemoveContainer" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.677303 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": container with ID starting with 5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169 not found: ID does not exist" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.677321 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} err="failed to get container status \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": rpc error: code = NotFound desc = could not find container \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": container with ID starting with 5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.677334 4832 scope.go:117] "RemoveContainer" containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.677784 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": container with ID starting with 071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b not found: ID does not exist" 
containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.677800 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} err="failed to get container status \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": rpc error: code = NotFound desc = could not find container \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": container with ID starting with 071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.677811 4832 scope.go:117] "RemoveContainer" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.678047 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": container with ID starting with 7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef not found: ID does not exist" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.678077 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} err="failed to get container status \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": rpc error: code = NotFound desc = could not find container \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": container with ID starting with 7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.678093 4832 scope.go:117] "RemoveContainer" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" Dec 04 06:20:57 crc kubenswrapper[4832]: E1204 06:20:57.678507 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": container with ID starting with 69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167 not found: ID does not exist" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.678532 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} err="failed to get container status \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": rpc error: code = NotFound desc = could not find container \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": container with ID starting with 69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.678547 4832 scope.go:117] "RemoveContainer" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.678786 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} err="failed to get container status \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": rpc error: code = NotFound desc = could not find container \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": container with ID starting with fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.678804 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.679035 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} err="failed to get container status \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": rpc error: code = NotFound desc = could not find container \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": container with ID starting with d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.679049 4832 scope.go:117] "RemoveContainer" containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.679418 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} err="failed to get container status \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": rpc error: code = NotFound desc = could not find container \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": container with ID starting with 2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.679444 4832 scope.go:117] "RemoveContainer" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.679706 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} err="failed to get container status \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": rpc error: code = NotFound desc = could not find container \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": container with ID starting with 3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.679723 4832 scope.go:117] "RemoveContainer" containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.680063 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} err="failed to get container status \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": rpc error: code = NotFound desc = could not find container \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": container with ID starting with 12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3 not found: ID does not exist" Dec 
04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.680079 4832 scope.go:117] "RemoveContainer" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.680374 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} err="failed to get container status \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": rpc error: code = NotFound desc = could not find container \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": container with ID starting with 0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.680409 4832 scope.go:117] "RemoveContainer" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682037 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} err="failed to get container status \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": rpc error: code = NotFound desc = could not find container \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": container with ID starting with 5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682060 4832 scope.go:117] "RemoveContainer" containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682329 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} err="failed to get container status \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": rpc error: code = NotFound desc = could not find container \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": container with ID starting with 071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682346 4832 scope.go:117] "RemoveContainer" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682645 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} err="failed to get container status \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": rpc error: code = NotFound desc = could not find container \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": container with ID starting with 7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682666 4832 scope.go:117] "RemoveContainer" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682946 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} err="failed to get container status 
\"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": rpc error: code = NotFound desc = could not find container \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": container with ID starting with 69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.682974 4832 scope.go:117] "RemoveContainer" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.683295 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} err="failed to get container status \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": rpc error: code = NotFound desc = could not find container \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": container with ID starting with fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.683314 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.683670 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} err="failed to get container status \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": rpc error: code = NotFound desc = could not find container \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": container with ID starting with d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.683695 4832 scope.go:117] "RemoveContainer" containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.684476 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} err="failed to get container status \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": rpc error: code = NotFound desc = could not find container \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": container with ID starting with 2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.684499 4832 scope.go:117] "RemoveContainer" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.693251 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} err="failed to get container status \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": rpc error: code = NotFound desc = could not find container \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": container with ID starting with 3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.693304 4832 scope.go:117] "RemoveContainer" 
containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.693958 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} err="failed to get container status \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": rpc error: code = NotFound desc = could not find container \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": container with ID starting with 12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.693980 4832 scope.go:117] "RemoveContainer" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.694514 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} err="failed to get container status \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": rpc error: code = NotFound desc = could not find container \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": container with ID starting with 0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.694554 4832 scope.go:117] "RemoveContainer" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.694911 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} err="failed to get container status \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": rpc error: code = NotFound desc = could not find container \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": container with ID starting with 5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.694930 4832 scope.go:117] "RemoveContainer" containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.695171 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} err="failed to get container status \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": rpc error: code = NotFound desc = could not find container \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": container with ID starting with 071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.695213 4832 scope.go:117] "RemoveContainer" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.695663 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} err="failed to get container status \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": rpc error: code = NotFound desc = could not find 
container \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": container with ID starting with 7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.695685 4832 scope.go:117] "RemoveContainer" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.695937 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} err="failed to get container status \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": rpc error: code = NotFound desc = could not find container \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": container with ID starting with 69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.695973 4832 scope.go:117] "RemoveContainer" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.696585 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} err="failed to get container status \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": rpc error: code = NotFound desc = could not find container \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": container with ID starting with fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.696609 4832 scope.go:117] "RemoveContainer" containerID="d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.696908 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159"} err="failed to get container status \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": rpc error: code = NotFound desc = could not find container \"d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159\": container with ID starting with d4a946e588cd74e5addff7305ff3918d32e627ea951b64bd206763f553790159 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.696928 4832 scope.go:117] "RemoveContainer" containerID="2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.697204 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4"} err="failed to get container status \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": rpc error: code = NotFound desc = could not find container \"2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4\": container with ID starting with 2064ef6c01844eb9a18caa1d13e2d353c211497ac3a24a0517f47087762c45b4 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.697223 4832 scope.go:117] "RemoveContainer" containerID="3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.697458 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae"} err="failed to get container status \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": rpc error: code = NotFound desc = could not find container \"3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae\": container with ID starting with 3fc0a3c9c50656554fdf1d03d11563e2faf2b55f666e9ad7648a94ae0fabd9ae not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.697476 4832 scope.go:117] "RemoveContainer" containerID="12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.697814 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3"} err="failed to get container status \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": rpc error: code = NotFound desc = could not find container \"12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3\": container with ID starting with 12205c605d5e686dc4ef0c81e2080ce1158b2660f2ffa05f971557821f5632b3 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.697838 4832 scope.go:117] "RemoveContainer" containerID="0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.698246 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a"} err="failed to get container status \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": rpc error: code = NotFound desc = could not find container \"0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a\": container with ID starting with 0176c86eca12c9c06f6290d42d6fe84dd0569c04388fc7f4b75736484ba2499a not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.698286 4832 scope.go:117] "RemoveContainer" containerID="5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.698656 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169"} err="failed to get container status \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": rpc error: code = NotFound desc = could not find container \"5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169\": container with ID starting with 5d9b9e06c7011c88a5960465e25e1d9da7f7f04c55d69442f824a31514240169 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.698676 4832 scope.go:117] "RemoveContainer" containerID="071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.699219 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b"} err="failed to get container status \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": rpc error: code = NotFound desc = could not find container \"071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b\": container with ID starting with 
071fc8abefb8b77b8300ce2026b3c603fe2cb78b0468f608aa2f4edb8f638b2b not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.699240 4832 scope.go:117] "RemoveContainer" containerID="7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.699595 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef"} err="failed to get container status \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": rpc error: code = NotFound desc = could not find container \"7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef\": container with ID starting with 7138f171ed2d67e887ff60cc3e3c65e3497ca3f7af9a8b387b3056c7292e46ef not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.699643 4832 scope.go:117] "RemoveContainer" containerID="69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.704850 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167"} err="failed to get container status \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": rpc error: code = NotFound desc = could not find container \"69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167\": container with ID starting with 69e6b93259657eff1065b303940fa744392ebbe8e256f6d1d40bdef34f885167 not found: ID does not exist" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.704874 4832 scope.go:117] "RemoveContainer" containerID="fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7" Dec 04 06:20:57 crc kubenswrapper[4832]: I1204 06:20:57.705199 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7"} err="failed to get container status \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": rpc error: code = NotFound desc = could not find container \"fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7\": container with ID starting with fd8d92b687c8112abc783fe95d4601662b0812fa2e6e553af315f193076725e7 not found: ID does not exist" Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.415362 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"71545f70184b1a72ea0448ad9e0d08cd77b98d676fa856dbf7d96891a3c11069"} Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.415428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"2d7446e65031a5d4ac257096d14338321b825162a8c64bce23da7c3aa97200ca"} Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.415441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"9192b810a9fa3289175eb4438c3cba8632061b9476d0a98101b098ccf0359b7f"} Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.415450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" 
event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"4c14b004b5e63cbcdbce601ce2349bad42e3c1f1dc816ecdadef8bf2c84a0273"} Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.415458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"3a5362a0ecb0a7cd75163e961e1f5de4d2e0cb4b16f7367d5e131cce47a92aae"} Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.415467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"b84b0dccd1bba7229e74ebf1a4ecc8a787ea4ef34fcdf12a2574953a6dd95566"} Dec 04 06:20:58 crc kubenswrapper[4832]: I1204 06:20:58.721539 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c442d280-de5c-4240-90b3-af48bbb2f1c5" path="/var/lib/kubelet/pods/c442d280-de5c-4240-90b3-af48bbb2f1c5/volumes" Dec 04 06:21:00 crc kubenswrapper[4832]: I1204 06:21:00.436152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"dc3c97cce703b6196b2e2088dc10ac148040d316f3012cbc4bc15b67767f9e94"} Dec 04 06:21:01 crc kubenswrapper[4832]: I1204 06:21:01.445138 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" event={"ID":"ee4adb5f-37ff-4ad0-95a7-844ca6d20a48","Type":"ContainerStarted","Data":"d2e468cb741e5c3ef680c1b33a9f60007ede9b0ef362c1240a403061afd70baf"} Dec 04 06:21:01 crc kubenswrapper[4832]: I1204 06:21:01.445590 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:21:01 crc kubenswrapper[4832]: I1204 06:21:01.445611 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:21:01 crc kubenswrapper[4832]: I1204 06:21:01.474539 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:21:01 crc kubenswrapper[4832]: I1204 06:21:01.478653 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" podStartSLOduration=5.478613673 podStartE2EDuration="5.478613673s" podCreationTimestamp="2025-12-04 06:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:21:01.471591195 +0000 UTC m=+717.084408911" watchObservedRunningTime="2025-12-04 06:21:01.478613673 +0000 UTC m=+717.091431379" Dec 04 06:21:02 crc kubenswrapper[4832]: I1204 06:21:02.450500 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:21:02 crc kubenswrapper[4832]: I1204 06:21:02.474483 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:21:04 crc kubenswrapper[4832]: I1204 06:21:04.971478 4832 scope.go:117] "RemoveContainer" containerID="cfe86cb8678e2b9c22d173d28c52a3845cd10e5da48de718e7230d3af59a77e8" Dec 04 06:21:05 crc kubenswrapper[4832]: I1204 06:21:05.363284 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:21:05 crc kubenswrapper[4832]: I1204 06:21:05.363688 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:21:05 crc kubenswrapper[4832]: I1204 06:21:05.472256 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/2.log" Dec 04 06:21:09 crc kubenswrapper[4832]: I1204 06:21:09.710329 4832 scope.go:117] "RemoveContainer" containerID="884e91c9ce60008aa03c7bf5ca552038900a5fc445619ae1247a88ea68ff4873" Dec 04 06:21:09 crc kubenswrapper[4832]: E1204 06:21:09.712143 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9nl9n_openshift-multus(325cffd3-4d6a-4916-8ad9-743cdc486769)\"" pod="openshift-multus/multus-9nl9n" podUID="325cffd3-4d6a-4916-8ad9-743cdc486769" Dec 04 06:21:22 crc kubenswrapper[4832]: I1204 06:21:22.711127 4832 scope.go:117] "RemoveContainer" containerID="884e91c9ce60008aa03c7bf5ca552038900a5fc445619ae1247a88ea68ff4873" Dec 04 06:21:23 crc kubenswrapper[4832]: I1204 06:21:23.577501 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nl9n_325cffd3-4d6a-4916-8ad9-743cdc486769/kube-multus/2.log" Dec 04 06:21:23 crc kubenswrapper[4832]: I1204 06:21:23.577962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nl9n" event={"ID":"325cffd3-4d6a-4916-8ad9-743cdc486769","Type":"ContainerStarted","Data":"9009d1d0816c597743b52231a788cbed3cb968455b06a15804f46394825c7d7d"} Dec 04 06:21:26 crc kubenswrapper[4832]: I1204 06:21:26.967994 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9bcbd" Dec 04 06:21:34 crc kubenswrapper[4832]: I1204 06:21:34.520628 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.356947 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4"] Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.358035 4832 util.go:30] "No sandbox for pod can be found. 
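
[Note] The run of paired "RemoveContainer" / "DeleteContainer returned error ... NotFound" entries at 06:20:57 is a benign race, not a failure: the kubelet is pruning container IDs it still holds in its cache, but CRI-O has already removed them, so the status lookup that precedes each delete comes back NotFound and the kubelet moves on to the next ID. A minimal sketch of that idempotent-deletion pattern, assuming a hypothetical CRI-shaped Runtime interface (all names here are invented for illustration):

```go
package cleanup

import (
	"errors"
	"fmt"
)

// ErrNotFound stands in for a gRPC NotFound status from the runtime.
var ErrNotFound = errors.New("container not found")

// Runtime is a hypothetical, CRI-shaped client.
type Runtime interface {
	ContainerStatus(id string) (string, error)
	RemoveContainer(id string) error
}

// removeContainer deletes a container but treats NotFound as success:
// if the runtime has already garbage-collected the ID there is nothing
// left to do, so the cleanup stays idempotent.
func removeContainer(rt Runtime, id string) error {
	if _, err := rt.ContainerStatus(id); err != nil {
		if errors.Is(err, ErrNotFound) {
			fmt.Printf("container %s already gone, skipping\n", id)
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	return rt.RemoveContainer(id)
}
```
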
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.359800 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.362008 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.362067 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.362114 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.362750 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"469128422ffdf7c9a116b1453571faa4112e83e21b46cb276494efc9be588617"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.362822 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://469128422ffdf7c9a116b1453571faa4112e83e21b46cb276494efc9be588617" gracePeriod=600 Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.366506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4"] Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.502768 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.502846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.502905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdvn\" (UniqueName: \"kubernetes.io/projected/1076a843-3b6f-4c93-9aa4-0207c2586cbb-kube-api-access-qkdvn\") pod 
\"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.603979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.604627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdvn\" (UniqueName: \"kubernetes.io/projected/1076a843-3b6f-4c93-9aa4-0207c2586cbb-kube-api-access-qkdvn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.604538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.604723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.605102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.628167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdvn\" (UniqueName: \"kubernetes.io/projected/1076a843-3b6f-4c93-9aa4-0207c2586cbb-kube-api-access-qkdvn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.644801 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="469128422ffdf7c9a116b1453571faa4112e83e21b46cb276494efc9be588617" exitCode=0 Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.644847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" 
event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"469128422ffdf7c9a116b1453571faa4112e83e21b46cb276494efc9be588617"} Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.644878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"16a2a9ef3e62675c85662671dfe30288c81082d91cc1c7e3a8b0d7e2b9dfbee1"} Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.644896 4832 scope.go:117] "RemoveContainer" containerID="a310df5aafadcc8efe4afcfdedff7303ce96555ee4dee978bd0e572554bab684" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.673438 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:35 crc kubenswrapper[4832]: I1204 06:21:35.861654 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4"] Dec 04 06:21:35 crc kubenswrapper[4832]: W1204 06:21:35.865728 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1076a843_3b6f_4c93_9aa4_0207c2586cbb.slice/crio-ef8f3d65cfc402218dc60b50220bfbabd61233d675e58a6f884c723c2ec4b08f WatchSource:0}: Error finding container ef8f3d65cfc402218dc60b50220bfbabd61233d675e58a6f884c723c2ec4b08f: Status 404 returned error can't find the container with id ef8f3d65cfc402218dc60b50220bfbabd61233d675e58a6f884c723c2ec4b08f Dec 04 06:21:36 crc kubenswrapper[4832]: I1204 06:21:36.651283 4832 generic.go:334] "Generic (PLEG): container finished" podID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerID="24b95521ebe71a5464827e6d2d440782315c169bf0ecbf9768527cfd7e82c0f5" exitCode=0 Dec 04 06:21:36 crc kubenswrapper[4832]: I1204 06:21:36.651490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" event={"ID":"1076a843-3b6f-4c93-9aa4-0207c2586cbb","Type":"ContainerDied","Data":"24b95521ebe71a5464827e6d2d440782315c169bf0ecbf9768527cfd7e82c0f5"} Dec 04 06:21:36 crc kubenswrapper[4832]: I1204 06:21:36.651828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" event={"ID":"1076a843-3b6f-4c93-9aa4-0207c2586cbb","Type":"ContainerStarted","Data":"ef8f3d65cfc402218dc60b50220bfbabd61233d675e58a6f884c723c2ec4b08f"} Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.718212 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdwhj"] Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.719510 4832 util.go:30] "No sandbox for pod can be found. 
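
[Note] The machine-config-daemon sequence above closes a full liveness-restart loop: the probe GET to http://127.0.0.1:8798/health fails with connection refused at 06:21:05 and again at 06:21:35 (30 s apart), the kubelet marks the probe unhealthy, logs "Container machine-config-daemon failed liveness probe, will be restarted", kills 469128… with the pod's 600 s grace period, and the PLEG then reports ContainerDied for the old container followed by ContainerStarted for its replacement 16a2a9…. A minimal sketch of a threshold-based liveness loop; the period and failure threshold below are assumptions for illustration, since the log does not show the pod's actual probe spec:

```go
package liveness

import (
	"fmt"
	"net/http"
	"time"
)

// probeLoop GETs url every period and asks the caller to restart the
// container after failureThreshold consecutive failures, mirroring the
// failure -> "will be restarted" -> kill -> new container sequence.
func probeLoop(url string, period time.Duration, failureThreshold int, restart func() error) {
	failures := 0
	for range time.Tick(period) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < 400 {
				failures = 0 // a healthy probe resets the counter
				continue
			}
		}
		failures++
		fmt.Printf("liveness probe failed (%d/%d): %v\n", failures, failureThreshold, err)
		if failures >= failureThreshold {
			_ = restart() // kill with grace period; kubelet starts a new container
			failures = 0
		}
	}
}
```
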
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.731414 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdwhj"] Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.834115 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxwh\" (UniqueName: \"kubernetes.io/projected/c8e31163-ee23-40d8-b06a-305218c2ff30-kube-api-access-ltxwh\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.834234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-utilities\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.834278 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-catalog-content\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.935954 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-catalog-content\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.936015 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxwh\" (UniqueName: \"kubernetes.io/projected/c8e31163-ee23-40d8-b06a-305218c2ff30-kube-api-access-ltxwh\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.936078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-utilities\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.936608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-catalog-content\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.936654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-utilities\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:37 crc kubenswrapper[4832]: I1204 06:21:37.955967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ltxwh\" (UniqueName: \"kubernetes.io/projected/c8e31163-ee23-40d8-b06a-305218c2ff30-kube-api-access-ltxwh\") pod \"redhat-operators-sdwhj\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.036519 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.244102 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdwhj"] Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.668820 4832 generic.go:334] "Generic (PLEG): container finished" podID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerID="474e7370766e4181a625fb261022088f57853870e3d73b72dd6266956622f544" exitCode=0 Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.668934 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" event={"ID":"1076a843-3b6f-4c93-9aa4-0207c2586cbb","Type":"ContainerDied","Data":"474e7370766e4181a625fb261022088f57853870e3d73b72dd6266956622f544"} Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.670777 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerID="a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6" exitCode=0 Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.670813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerDied","Data":"a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6"} Dec 04 06:21:38 crc kubenswrapper[4832]: I1204 06:21:38.670843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerStarted","Data":"9aa2be53bc974432d25435a37d5a3e20fab93189ee3060cf8673aa57585fb66d"} Dec 04 06:21:39 crc kubenswrapper[4832]: I1204 06:21:39.679131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerStarted","Data":"7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c"} Dec 04 06:21:39 crc kubenswrapper[4832]: I1204 06:21:39.682517 4832 generic.go:334] "Generic (PLEG): container finished" podID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerID="31d23003821571a0ca70264ef738e566fbdf66e768c32071fbef1d1464337e86" exitCode=0 Dec 04 06:21:39 crc kubenswrapper[4832]: I1204 06:21:39.682625 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" event={"ID":"1076a843-3b6f-4c93-9aa4-0207c2586cbb","Type":"ContainerDied","Data":"31d23003821571a0ca70264ef738e566fbdf66e768c32071fbef1d1464337e86"} Dec 04 06:21:40 crc kubenswrapper[4832]: I1204 06:21:40.690567 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerID="7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c" exitCode=0 Dec 04 06:21:40 crc kubenswrapper[4832]: I1204 06:21:40.690611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" 
event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerDied","Data":"7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c"} Dec 04 06:21:40 crc kubenswrapper[4832]: I1204 06:21:40.920494 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.078271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-bundle\") pod \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.078752 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-util\") pod \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.078819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkdvn\" (UniqueName: \"kubernetes.io/projected/1076a843-3b6f-4c93-9aa4-0207c2586cbb-kube-api-access-qkdvn\") pod \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\" (UID: \"1076a843-3b6f-4c93-9aa4-0207c2586cbb\") " Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.079235 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-bundle" (OuterVolumeSpecName: "bundle") pod "1076a843-3b6f-4c93-9aa4-0207c2586cbb" (UID: "1076a843-3b6f-4c93-9aa4-0207c2586cbb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.085093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1076a843-3b6f-4c93-9aa4-0207c2586cbb-kube-api-access-qkdvn" (OuterVolumeSpecName: "kube-api-access-qkdvn") pod "1076a843-3b6f-4c93-9aa4-0207c2586cbb" (UID: "1076a843-3b6f-4c93-9aa4-0207c2586cbb"). InnerVolumeSpecName "kube-api-access-qkdvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.096428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-util" (OuterVolumeSpecName: "util") pod "1076a843-3b6f-4c93-9aa4-0207c2586cbb" (UID: "1076a843-3b6f-4c93-9aa4-0207c2586cbb"). InnerVolumeSpecName "util". 
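
[Note] The reconciler_common / operation_generator entries for the bundle pod 1076a843… show both halves of the kubelet volume manager's reconcile loop. On admission, each declared volume (bundle, util, kube-api-access-qkdvn) goes through VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded; once the pod finishes, the same volumes go through UnmountVolume started and UnmountVolume.TearDown succeeded until "Volume detached" is logged for each. A minimal sketch of that desired-state versus actual-state reconciliation (types invented for illustration):

```go
package volumes

// reconcile drives the actual set of mounted volumes toward the
// desired set, the way the kubelet volume manager does: mount what is
// declared but missing, unmount what is mounted but no longer wanted.
func reconcile(desired, actual map[string]bool, mount, unmount func(name string) error) {
	for name := range desired { // MountVolume started / SetUp succeeded
		if !actual[name] {
			if err := mount(name); err == nil {
				actual[name] = true
			}
		}
	}
	for name := range actual { // UnmountVolume started / TearDown succeeded
		if !desired[name] {
			if err := unmount(name); err == nil {
				delete(actual, name) // reported as "Volume detached"
			}
		}
	}
}
```
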
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.180582 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-util\") on node \"crc\" DevicePath \"\"" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.180773 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkdvn\" (UniqueName: \"kubernetes.io/projected/1076a843-3b6f-4c93-9aa4-0207c2586cbb-kube-api-access-qkdvn\") on node \"crc\" DevicePath \"\"" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.180849 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1076a843-3b6f-4c93-9aa4-0207c2586cbb-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.697881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" event={"ID":"1076a843-3b6f-4c93-9aa4-0207c2586cbb","Type":"ContainerDied","Data":"ef8f3d65cfc402218dc60b50220bfbabd61233d675e58a6f884c723c2ec4b08f"} Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.697919 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef8f3d65cfc402218dc60b50220bfbabd61233d675e58a6f884c723c2ec4b08f" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.697955 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4" Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.700617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerStarted","Data":"4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892"} Dec 04 06:21:41 crc kubenswrapper[4832]: I1204 06:21:41.954878 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdwhj" podStartSLOduration=2.545555712 podStartE2EDuration="4.954855796s" podCreationTimestamp="2025-12-04 06:21:37 +0000 UTC" firstStartedPulling="2025-12-04 06:21:38.672504189 +0000 UTC m=+754.285321895" lastFinishedPulling="2025-12-04 06:21:41.081804283 +0000 UTC m=+756.694621979" observedRunningTime="2025-12-04 06:21:41.719776097 +0000 UTC m=+757.332593823" watchObservedRunningTime="2025-12-04 06:21:41.954855796 +0000 UTC m=+757.567673502" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.625696 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt"] Dec 04 06:21:45 crc kubenswrapper[4832]: E1204 06:21:45.627355 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="util" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.627450 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="util" Dec 04 06:21:45 crc kubenswrapper[4832]: E1204 06:21:45.627512 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="extract" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.627573 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="extract" Dec 04 06:21:45 crc 
kubenswrapper[4832]: E1204 06:21:45.627629 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="pull" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.627679 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="pull" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.627829 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1076a843-3b6f-4c93-9aa4-0207c2586cbb" containerName="extract" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.628230 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.630022 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.630280 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.630518 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c9zdz" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.644563 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt"] Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.736110 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59l62\" (UniqueName: \"kubernetes.io/projected/2a85a45c-df69-4030-af49-e7f2bb0b755e-kube-api-access-59l62\") pod \"nmstate-operator-5b5b58f5c8-rqknt\" (UID: \"2a85a45c-df69-4030-af49-e7f2bb0b755e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.837163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59l62\" (UniqueName: \"kubernetes.io/projected/2a85a45c-df69-4030-af49-e7f2bb0b755e-kube-api-access-59l62\") pod \"nmstate-operator-5b5b58f5c8-rqknt\" (UID: \"2a85a45c-df69-4030-af49-e7f2bb0b755e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.865926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59l62\" (UniqueName: \"kubernetes.io/projected/2a85a45c-df69-4030-af49-e7f2bb0b755e-kube-api-access-59l62\") pod \"nmstate-operator-5b5b58f5c8-rqknt\" (UID: \"2a85a45c-df69-4030-af49-e7f2bb0b755e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" Dec 04 06:21:45 crc kubenswrapper[4832]: I1204 06:21:45.953745 4832 util.go:30] "No sandbox for pod can be found. 
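
[Note] The two pod_startup_latency_tracker entries (06:21:01 and 06:21:41) make the metric's arithmetic visible. For ovnkube-node-9bcbd no image was pulled (both pull timestamps are the zero time), so podStartSLOduration equals podStartE2EDuration: 06:21:01.478613673 minus the 06:20:56 creation time is exactly 5.478613673 s. For redhat-operators-sdwhj the SLO duration excludes the image pull window: 4.954855796 s end-to-end minus the time between firstStartedPulling and lastFinishedPulling gives the logged 2.545555712 s (the subtraction matches exactly when done with the monotonic m=+… readings; the wall-clock timestamps land within about 10 ns of it). A short check of that arithmetic with the log's own timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// Recompute redhat-operators-sdwhj's startup numbers from the log:
// e2e = observed running - created; SLO = e2e - image pull window.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-04 06:21:37 +0000 UTC")
	running := parse("2025-12-04 06:21:41.954855796 +0000 UTC")
	pullStart := parse("2025-12-04 06:21:38.672504189 +0000 UTC")
	pullEnd := parse("2025-12-04 06:21:41.081804283 +0000 UTC")

	e2e := running.Sub(created)     // 4.954855796s, as logged
	pull := pullEnd.Sub(pullStart)  // ~2.4093s of image pulling
	fmt.Println(e2e, e2e-pull)      // SLO ~2.5456s (logged: 2.545555712)
}
```
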
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" Dec 04 06:21:46 crc kubenswrapper[4832]: I1204 06:21:46.159583 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt"] Dec 04 06:21:46 crc kubenswrapper[4832]: I1204 06:21:46.740153 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" event={"ID":"2a85a45c-df69-4030-af49-e7f2bb0b755e","Type":"ContainerStarted","Data":"f09e57690028c82c832d985c8bd0bc70b4fb8f57ae53eb4282ea7bf57757f389"} Dec 04 06:21:48 crc kubenswrapper[4832]: I1204 06:21:48.037580 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:48 crc kubenswrapper[4832]: I1204 06:21:48.038049 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:48 crc kubenswrapper[4832]: I1204 06:21:48.085033 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:48 crc kubenswrapper[4832]: I1204 06:21:48.796850 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:49 crc kubenswrapper[4832]: I1204 06:21:49.759018 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" event={"ID":"2a85a45c-df69-4030-af49-e7f2bb0b755e","Type":"ContainerStarted","Data":"09f7a3625cfdc54d283b1c60010592ca630dc0ef034a6b4151c2bb0e34a2e6e1"} Dec 04 06:21:50 crc kubenswrapper[4832]: I1204 06:21:50.707761 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-rqknt" podStartSLOduration=2.691986122 podStartE2EDuration="5.70774166s" podCreationTimestamp="2025-12-04 06:21:45 +0000 UTC" firstStartedPulling="2025-12-04 06:21:46.176140408 +0000 UTC m=+761.788958104" lastFinishedPulling="2025-12-04 06:21:49.191895936 +0000 UTC m=+764.804713642" observedRunningTime="2025-12-04 06:21:49.785684409 +0000 UTC m=+765.398502115" watchObservedRunningTime="2025-12-04 06:21:50.70774166 +0000 UTC m=+766.320559366" Dec 04 06:21:50 crc kubenswrapper[4832]: I1204 06:21:50.708792 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdwhj"] Dec 04 06:21:51 crc kubenswrapper[4832]: I1204 06:21:51.771002 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdwhj" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="registry-server" containerID="cri-o://4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892" gracePeriod=2 Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.214734 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.336006 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-utilities\") pod \"c8e31163-ee23-40d8-b06a-305218c2ff30\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.336123 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltxwh\" (UniqueName: \"kubernetes.io/projected/c8e31163-ee23-40d8-b06a-305218c2ff30-kube-api-access-ltxwh\") pod \"c8e31163-ee23-40d8-b06a-305218c2ff30\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.336203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-catalog-content\") pod \"c8e31163-ee23-40d8-b06a-305218c2ff30\" (UID: \"c8e31163-ee23-40d8-b06a-305218c2ff30\") " Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.337609 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-utilities" (OuterVolumeSpecName: "utilities") pod "c8e31163-ee23-40d8-b06a-305218c2ff30" (UID: "c8e31163-ee23-40d8-b06a-305218c2ff30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.343607 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e31163-ee23-40d8-b06a-305218c2ff30-kube-api-access-ltxwh" (OuterVolumeSpecName: "kube-api-access-ltxwh") pod "c8e31163-ee23-40d8-b06a-305218c2ff30" (UID: "c8e31163-ee23-40d8-b06a-305218c2ff30"). InnerVolumeSpecName "kube-api-access-ltxwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.437803 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.437847 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltxwh\" (UniqueName: \"kubernetes.io/projected/c8e31163-ee23-40d8-b06a-305218c2ff30-kube-api-access-ltxwh\") on node \"crc\" DevicePath \"\"" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.443134 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e31163-ee23-40d8-b06a-305218c2ff30" (UID: "c8e31163-ee23-40d8-b06a-305218c2ff30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.539698 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e31163-ee23-40d8-b06a-305218c2ff30-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.782951 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerID="4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892" exitCode=0 Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.783006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerDied","Data":"4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892"} Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.783039 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdwhj" event={"ID":"c8e31163-ee23-40d8-b06a-305218c2ff30","Type":"ContainerDied","Data":"9aa2be53bc974432d25435a37d5a3e20fab93189ee3060cf8673aa57585fb66d"} Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.783056 4832 scope.go:117] "RemoveContainer" containerID="4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.783009 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdwhj" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.800646 4832 scope.go:117] "RemoveContainer" containerID="7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.813811 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdwhj"] Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.818109 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdwhj"] Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.830933 4832 scope.go:117] "RemoveContainer" containerID="a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.847351 4832 scope.go:117] "RemoveContainer" containerID="4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892" Dec 04 06:21:53 crc kubenswrapper[4832]: E1204 06:21:53.847845 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892\": container with ID starting with 4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892 not found: ID does not exist" containerID="4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.847886 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892"} err="failed to get container status \"4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892\": rpc error: code = NotFound desc = could not find container \"4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892\": container with ID starting with 4514dcacc2ac5cd652fde0c2d1116d4489a48142073592c70b0a761209210892 not found: ID does not exist" Dec 04 06:21:53 crc 
kubenswrapper[4832]: I1204 06:21:53.847908 4832 scope.go:117] "RemoveContainer" containerID="7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c" Dec 04 06:21:53 crc kubenswrapper[4832]: E1204 06:21:53.848143 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c\": container with ID starting with 7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c not found: ID does not exist" containerID="7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.848174 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c"} err="failed to get container status \"7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c\": rpc error: code = NotFound desc = could not find container \"7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c\": container with ID starting with 7365b45ee57f6e8b2928345996186992b7b1a0aba083bc0c8718530ccfb0423c not found: ID does not exist" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.848193 4832 scope.go:117] "RemoveContainer" containerID="a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6" Dec 04 06:21:53 crc kubenswrapper[4832]: E1204 06:21:53.848548 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6\": container with ID starting with a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6 not found: ID does not exist" containerID="a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6" Dec 04 06:21:53 crc kubenswrapper[4832]: I1204 06:21:53.848594 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6"} err="failed to get container status \"a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6\": rpc error: code = NotFound desc = could not find container \"a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6\": container with ID starting with a957421147252d22203b9b4543786407a19349c95319b16c1373d13d2734acb6 not found: ID does not exist" Dec 04 06:21:54 crc kubenswrapper[4832]: I1204 06:21:54.718979 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" path="/var/lib/kubelet/pods/c8e31163-ee23-40d8-b06a-305218c2ff30/volumes" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.673515 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4"] Dec 04 06:21:55 crc kubenswrapper[4832]: E1204 06:21:55.674139 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="extract-utilities" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.674175 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="extract-utilities" Dec 04 06:21:55 crc kubenswrapper[4832]: E1204 06:21:55.674201 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="registry-server" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.674210 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="registry-server" Dec 04 06:21:55 crc kubenswrapper[4832]: E1204 06:21:55.674227 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="extract-content" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.674238 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="extract-content" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.674480 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e31163-ee23-40d8-b06a-305218c2ff30" containerName="registry-server" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.675586 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.677509 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk"] Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.677721 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fqvjn" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.678342 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.679675 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.687269 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4"] Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.745268 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4qw9l"] Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.746163 4832 util.go:30] "No sandbox for pod can be found. 
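
[Note] The cpu_manager / state_mem / memory_manager lines at 06:21:45 and 06:21:55 are admission-time housekeeping: when a new pod arrives, RemoveStaleState walks the checkpointed CPU and memory assignments and drops entries for containers of pods that no longer exist, here the finished bundle pod 1076a843… (pull, extract, util) and the deleted catalog pod c8e31163… (extract-utilities, extract-content, registry-server). The E1204 severity is cosmetic; the removals themselves are the expected cleanup. A minimal sketch of the pattern (the state layout is invented for illustration):

```go
package cpustate

// assignments maps podUID -> containerName -> checkpointed CPU set.
type assignments map[string]map[string]string

// removeStaleState drops checkpointed assignments for pods that are no
// longer active, mirroring "RemoveStaleState: removing container"
// followed by "Deleted CPUSet assignment" in the log.
func removeStaleState(state assignments, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			delete(containers, name) // "Deleted CPUSet assignment"
		}
		delete(state, podUID)
	}
}
```
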
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.770004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/097d6138-4a11-4545-bb6e-a61ea6cff7fb-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6ntwk\" (UID: \"097d6138-4a11-4545-bb6e-a61ea6cff7fb\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.770066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5xw\" (UniqueName: \"kubernetes.io/projected/a6d2dc02-8689-4c6b-bde6-f9120db9f714-kube-api-access-6t5xw\") pod \"nmstate-metrics-7f946cbc9-gcvj4\" (UID: \"a6d2dc02-8689-4c6b-bde6-f9120db9f714\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.770094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65gk9\" (UniqueName: \"kubernetes.io/projected/097d6138-4a11-4545-bb6e-a61ea6cff7fb-kube-api-access-65gk9\") pod \"nmstate-webhook-5f6d4c5ccb-6ntwk\" (UID: \"097d6138-4a11-4545-bb6e-a61ea6cff7fb\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.773015 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk"] Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.808008 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v"] Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.808902 4832 util.go:30] "No sandbox for pod can be found. 
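
[Note] Each pod being set up in this stretch of the log mounts exactly one kube-api-access-<suffix> projected volume (qkdvn, ltxwh, 59l62, 6t5xw, 65gk9, 8qgb4, 7vd2s, 4xhjx). That volume is injected by the ServiceAccount admission controller and combines a bound service account token, the kube-root-ca.crt ConfigMap (whose cache population also appears earlier in the log), and the pod's namespace. A sketch of its shape using the upstream k8s.io/api/core/v1 types; the exact defaults (for example the 3607 s token lifetime) vary by version and are an assumption here:

```go
package manifests

import corev1 "k8s.io/api/core/v1"

// kubeAPIAccessVolume approximates the kube-api-access-* projected
// volume: bound token + CA bundle + the pod's namespace.
func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // assumed default lifetime for the bound token
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}
```
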
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.810751 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.811195 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4p4gd" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.811355 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.837967 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v"] Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.872010 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/097d6138-4a11-4545-bb6e-a61ea6cff7fb-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6ntwk\" (UID: \"097d6138-4a11-4545-bb6e-a61ea6cff7fb\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.872546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-ovs-socket\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.872599 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5xw\" (UniqueName: \"kubernetes.io/projected/a6d2dc02-8689-4c6b-bde6-f9120db9f714-kube-api-access-6t5xw\") pod \"nmstate-metrics-7f946cbc9-gcvj4\" (UID: \"a6d2dc02-8689-4c6b-bde6-f9120db9f714\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.872635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65gk9\" (UniqueName: \"kubernetes.io/projected/097d6138-4a11-4545-bb6e-a61ea6cff7fb-kube-api-access-65gk9\") pod \"nmstate-webhook-5f6d4c5ccb-6ntwk\" (UID: \"097d6138-4a11-4545-bb6e-a61ea6cff7fb\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.872835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgb4\" (UniqueName: \"kubernetes.io/projected/3d1046ad-79df-4e1c-8c25-6af2a0379417-kube-api-access-8qgb4\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.872899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-nmstate-lock\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.873027 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-dbus-socket\") pod \"nmstate-handler-4qw9l\" (UID: 
\"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.877945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/097d6138-4a11-4545-bb6e-a61ea6cff7fb-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6ntwk\" (UID: \"097d6138-4a11-4545-bb6e-a61ea6cff7fb\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.887914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5xw\" (UniqueName: \"kubernetes.io/projected/a6d2dc02-8689-4c6b-bde6-f9120db9f714-kube-api-access-6t5xw\") pod \"nmstate-metrics-7f946cbc9-gcvj4\" (UID: \"a6d2dc02-8689-4c6b-bde6-f9120db9f714\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.890660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65gk9\" (UniqueName: \"kubernetes.io/projected/097d6138-4a11-4545-bb6e-a61ea6cff7fb-kube-api-access-65gk9\") pod \"nmstate-webhook-5f6d4c5ccb-6ntwk\" (UID: \"097d6138-4a11-4545-bb6e-a61ea6cff7fb\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/85a05826-c1ab-484b-b658-051dc78add17-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974433 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qgb4\" (UniqueName: \"kubernetes.io/projected/3d1046ad-79df-4e1c-8c25-6af2a0379417-kube-api-access-8qgb4\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974469 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-nmstate-lock\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-dbus-socket\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-ovs-socket\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974578 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/85a05826-c1ab-484b-b658-051dc78add17-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-nmstate-lock\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vd2s\" (UniqueName: \"kubernetes.io/projected/85a05826-c1ab-484b-b658-051dc78add17-kube-api-access-7vd2s\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-ovs-socket\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:55 crc kubenswrapper[4832]: I1204 06:21:55.974894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d1046ad-79df-4e1c-8c25-6af2a0379417-dbus-socket\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.000353 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qgb4\" (UniqueName: \"kubernetes.io/projected/3d1046ad-79df-4e1c-8c25-6af2a0379417-kube-api-access-8qgb4\") pod \"nmstate-handler-4qw9l\" (UID: \"3d1046ad-79df-4e1c-8c25-6af2a0379417\") " pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.005522 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f4778f56f-vt7b4"] Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.008917 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.037484 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f4778f56f-vt7b4"] Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.055563 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.064138 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.080747 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.084962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/85a05826-c1ab-484b-b658-051dc78add17-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.085015 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vd2s\" (UniqueName: \"kubernetes.io/projected/85a05826-c1ab-484b-b658-051dc78add17-kube-api-access-7vd2s\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.085059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/85a05826-c1ab-484b-b658-051dc78add17-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.086507 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/85a05826-c1ab-484b-b658-051dc78add17-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.094735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/85a05826-c1ab-484b-b658-051dc78add17-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.110172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vd2s\" (UniqueName: \"kubernetes.io/projected/85a05826-c1ab-484b-b658-051dc78add17-kube-api-access-7vd2s\") pod \"nmstate-console-plugin-7fbb5f6569-jfb7v\" (UID: \"85a05826-c1ab-484b-b658-051dc78add17\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: W1204 06:21:56.116242 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d1046ad_79df_4e1c_8c25_6af2a0379417.slice/crio-5fba84b38f592d8d5ff9626e18b2297be8c47cc1498be19dcbbb9664e2ede7cc WatchSource:0}: Error finding container 5fba84b38f592d8d5ff9626e18b2297be8c47cc1498be19dcbbb9664e2ede7cc: Status 404 returned error can't find the container with id 5fba84b38f592d8d5ff9626e18b2297be8c47cc1498be19dcbbb9664e2ede7cc Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.139451 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.189102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhjx\" (UniqueName: \"kubernetes.io/projected/bdc820cf-53f1-433e-99c7-c9e92f476e8f-kube-api-access-4xhjx\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.189165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-oauth-serving-cert\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.189190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-serving-cert\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.190751 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-config\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.190895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-service-ca\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.191022 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-oauth-config\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.191077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-trusted-ca-bundle\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292493 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-trusted-ca-bundle\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292565 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xhjx\" 
(UniqueName: \"kubernetes.io/projected/bdc820cf-53f1-433e-99c7-c9e92f476e8f-kube-api-access-4xhjx\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-oauth-serving-cert\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292738 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-serving-cert\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-config\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-service-ca\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.292888 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-oauth-config\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.294593 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-oauth-serving-cert\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.294806 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-config\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.295779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-service-ca\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.297187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bdc820cf-53f1-433e-99c7-c9e92f476e8f-trusted-ca-bundle\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.302182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-serving-cert\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.302662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdc820cf-53f1-433e-99c7-c9e92f476e8f-console-oauth-config\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.321338 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xhjx\" (UniqueName: \"kubernetes.io/projected/bdc820cf-53f1-433e-99c7-c9e92f476e8f-kube-api-access-4xhjx\") pod \"console-5f4778f56f-vt7b4\" (UID: \"bdc820cf-53f1-433e-99c7-c9e92f476e8f\") " pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.335068 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.407869 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4"] Dec 04 06:21:56 crc kubenswrapper[4832]: W1204 06:21:56.420028 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d2dc02_8689_4c6b_bde6_f9120db9f714.slice/crio-c7901977f73ad5835e6af51c9dfd92b819758a0371a1a4f50c9ec2176224d1ee WatchSource:0}: Error finding container c7901977f73ad5835e6af51c9dfd92b819758a0371a1a4f50c9ec2176224d1ee: Status 404 returned error can't find the container with id c7901977f73ad5835e6af51c9dfd92b819758a0371a1a4f50c9ec2176224d1ee Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.461013 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk"] Dec 04 06:21:56 crc kubenswrapper[4832]: W1204 06:21:56.478203 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod097d6138_4a11_4545_bb6e_a61ea6cff7fb.slice/crio-ba94d8625454bffbab17f644235dc6372f1e3cddb59e42829632aeaf740109b0 WatchSource:0}: Error finding container ba94d8625454bffbab17f644235dc6372f1e3cddb59e42829632aeaf740109b0: Status 404 returned error can't find the container with id ba94d8625454bffbab17f644235dc6372f1e3cddb59e42829632aeaf740109b0 Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.508280 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v"] Dec 04 06:21:56 crc kubenswrapper[4832]: W1204 06:21:56.510678 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85a05826_c1ab_484b_b658_051dc78add17.slice/crio-1cb56c1028a7c362218ef4198e7a06a3a9b8b5cb6b1051bd89020d74303871fb WatchSource:0}: Error finding container 
1cb56c1028a7c362218ef4198e7a06a3a9b8b5cb6b1051bd89020d74303871fb: Status 404 returned error can't find the container with id 1cb56c1028a7c362218ef4198e7a06a3a9b8b5cb6b1051bd89020d74303871fb Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.604765 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f4778f56f-vt7b4"] Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.800429 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4778f56f-vt7b4" event={"ID":"bdc820cf-53f1-433e-99c7-c9e92f476e8f","Type":"ContainerStarted","Data":"0f8ca43cd8d16d9f29831aba69b804eb369fe18ee325eb2e7088564ed2ba92c0"} Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.801536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" event={"ID":"097d6138-4a11-4545-bb6e-a61ea6cff7fb","Type":"ContainerStarted","Data":"ba94d8625454bffbab17f644235dc6372f1e3cddb59e42829632aeaf740109b0"} Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.802713 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" event={"ID":"a6d2dc02-8689-4c6b-bde6-f9120db9f714","Type":"ContainerStarted","Data":"c7901977f73ad5835e6af51c9dfd92b819758a0371a1a4f50c9ec2176224d1ee"} Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.803758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4qw9l" event={"ID":"3d1046ad-79df-4e1c-8c25-6af2a0379417","Type":"ContainerStarted","Data":"5fba84b38f592d8d5ff9626e18b2297be8c47cc1498be19dcbbb9664e2ede7cc"} Dec 04 06:21:56 crc kubenswrapper[4832]: I1204 06:21:56.804597 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" event={"ID":"85a05826-c1ab-484b-b658-051dc78add17","Type":"ContainerStarted","Data":"1cb56c1028a7c362218ef4198e7a06a3a9b8b5cb6b1051bd89020d74303871fb"} Dec 04 06:21:57 crc kubenswrapper[4832]: I1204 06:21:57.824639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f4778f56f-vt7b4" event={"ID":"bdc820cf-53f1-433e-99c7-c9e92f476e8f","Type":"ContainerStarted","Data":"6a34f1e702477b0244c3b9eb8eae27b1c973ad080bfd670b91b55e3d793c2e5d"} Dec 04 06:21:57 crc kubenswrapper[4832]: I1204 06:21:57.856762 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f4778f56f-vt7b4" podStartSLOduration=2.8567406220000002 podStartE2EDuration="2.856740622s" podCreationTimestamp="2025-12-04 06:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:21:57.848019164 +0000 UTC m=+773.460836880" watchObservedRunningTime="2025-12-04 06:21:57.856740622 +0000 UTC m=+773.469558348" Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.846114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" event={"ID":"85a05826-c1ab-484b-b658-051dc78add17","Type":"ContainerStarted","Data":"0f4deefee19145540c6d28a41707a7803c110bde633003cda384c87234d08f86"} Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.847854 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" event={"ID":"097d6138-4a11-4545-bb6e-a61ea6cff7fb","Type":"ContainerStarted","Data":"8154c198676ac3672aad7540d67a9eed214553173d126ad0990b1e2bc04fa1ed"} Dec 04 06:22:00 
crc kubenswrapper[4832]: I1204 06:22:00.847978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.851648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" event={"ID":"a6d2dc02-8689-4c6b-bde6-f9120db9f714","Type":"ContainerStarted","Data":"aec8379cbf703b927e9a53780aa20e50a3518dc625174eedbe0252310bb48f41"} Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.853963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4qw9l" event={"ID":"3d1046ad-79df-4e1c-8c25-6af2a0379417","Type":"ContainerStarted","Data":"efd3f876c042996bc16aba63fd40cbb91f60aa8bb48b0e38936dcf795f56b156"} Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.854514 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.863851 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-jfb7v" podStartSLOduration=2.4325139780000002 podStartE2EDuration="5.863714041s" podCreationTimestamp="2025-12-04 06:21:55 +0000 UTC" firstStartedPulling="2025-12-04 06:21:56.51435046 +0000 UTC m=+772.127168166" lastFinishedPulling="2025-12-04 06:21:59.945550513 +0000 UTC m=+775.558368229" observedRunningTime="2025-12-04 06:22:00.863438195 +0000 UTC m=+776.476255911" watchObservedRunningTime="2025-12-04 06:22:00.863714041 +0000 UTC m=+776.476531747" Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.916647 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" podStartSLOduration=2.442787183 podStartE2EDuration="5.916629472s" podCreationTimestamp="2025-12-04 06:21:55 +0000 UTC" firstStartedPulling="2025-12-04 06:21:56.484597661 +0000 UTC m=+772.097415367" lastFinishedPulling="2025-12-04 06:21:59.95843994 +0000 UTC m=+775.571257656" observedRunningTime="2025-12-04 06:22:00.890358056 +0000 UTC m=+776.503175762" watchObservedRunningTime="2025-12-04 06:22:00.916629472 +0000 UTC m=+776.529447178" Dec 04 06:22:00 crc kubenswrapper[4832]: I1204 06:22:00.917313 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4qw9l" podStartSLOduration=2.078236551 podStartE2EDuration="5.917308188s" podCreationTimestamp="2025-12-04 06:21:55 +0000 UTC" firstStartedPulling="2025-12-04 06:21:56.11842775 +0000 UTC m=+771.731245456" lastFinishedPulling="2025-12-04 06:21:59.957499367 +0000 UTC m=+775.570317093" observedRunningTime="2025-12-04 06:22:00.912135855 +0000 UTC m=+776.524953581" watchObservedRunningTime="2025-12-04 06:22:00.917308188 +0000 UTC m=+776.530125894" Dec 04 06:22:03 crc kubenswrapper[4832]: I1204 06:22:03.875429 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" event={"ID":"a6d2dc02-8689-4c6b-bde6-f9120db9f714","Type":"ContainerStarted","Data":"9a9d0a0b9377b7f5e7f0758d4694c3e209e11fb73dd29264906516efa99aabf3"} Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.114060 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4qw9l" Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.136225 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-gcvj4" podStartSLOduration=4.012572819 podStartE2EDuration="11.136206994s" podCreationTimestamp="2025-12-04 06:21:55 +0000 UTC" firstStartedPulling="2025-12-04 06:21:56.424531761 +0000 UTC m=+772.037349467" lastFinishedPulling="2025-12-04 06:22:03.548165936 +0000 UTC m=+779.160983642" observedRunningTime="2025-12-04 06:22:03.895706467 +0000 UTC m=+779.508524173" watchObservedRunningTime="2025-12-04 06:22:06.136206994 +0000 UTC m=+781.749024700" Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.336143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.336342 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.340942 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.897857 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f4778f56f-vt7b4" Dec 04 06:22:06 crc kubenswrapper[4832]: I1204 06:22:06.943709 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2thm"] Dec 04 06:22:16 crc kubenswrapper[4832]: I1204 06:22:16.070255 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6ntwk" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.157878 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48"] Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.159449 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.161331 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.171477 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48"] Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.287039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxcx\" (UniqueName: \"kubernetes.io/projected/d06954e0-1987-4ea1-8573-f3232b1a8e7e-kube-api-access-8bxcx\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.287623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.287759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.389239 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxcx\" (UniqueName: \"kubernetes.io/projected/d06954e0-1987-4ea1-8573-f3232b1a8e7e-kube-api-access-8bxcx\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.389322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.389352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.390068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.390307 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.410876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxcx\" (UniqueName: \"kubernetes.io/projected/d06954e0-1987-4ea1-8573-f3232b1a8e7e-kube-api-access-8bxcx\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.476054 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:29 crc kubenswrapper[4832]: I1204 06:22:29.970585 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48"] Dec 04 06:22:30 crc kubenswrapper[4832]: I1204 06:22:30.042816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" event={"ID":"d06954e0-1987-4ea1-8573-f3232b1a8e7e","Type":"ContainerStarted","Data":"0e4b7c8c5f4c7a7955275794161f64070c0fdef9e531f31026e3dd03fd2a20ec"} Dec 04 06:22:31 crc kubenswrapper[4832]: I1204 06:22:31.051159 4832 generic.go:334] "Generic (PLEG): container finished" podID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerID="2c4fc0f22df3256e061ce5811e944e41c2bcbaf913cdd2cdc2a195ca95ceb2a8" exitCode=0 Dec 04 06:22:31 crc kubenswrapper[4832]: I1204 06:22:31.051544 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" event={"ID":"d06954e0-1987-4ea1-8573-f3232b1a8e7e","Type":"ContainerDied","Data":"2c4fc0f22df3256e061ce5811e944e41c2bcbaf913cdd2cdc2a195ca95ceb2a8"} Dec 04 06:22:31 crc kubenswrapper[4832]: I1204 06:22:31.998254 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g2thm" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerName="console" containerID="cri-o://a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b" gracePeriod=15 Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.436439 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2thm_50fb7e5f-0fc6-47d2-a953-8fece3489792/console/0.log" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.436999 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639193 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-config\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-service-ca\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-serving-cert\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639640 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-trusted-ca-bundle\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639706 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrmt8\" (UniqueName: \"kubernetes.io/projected/50fb7e5f-0fc6-47d2-a953-8fece3489792-kube-api-access-qrmt8\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-oauth-config\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.639788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-oauth-serving-cert\") pod \"50fb7e5f-0fc6-47d2-a953-8fece3489792\" (UID: \"50fb7e5f-0fc6-47d2-a953-8fece3489792\") " Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.640562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.640963 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-service-ca" (OuterVolumeSpecName: "service-ca") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.641427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-config" (OuterVolumeSpecName: "console-config") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.642429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.646354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.646581 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.647048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50fb7e5f-0fc6-47d2-a953-8fece3489792-kube-api-access-qrmt8" (OuterVolumeSpecName: "kube-api-access-qrmt8") pod "50fb7e5f-0fc6-47d2-a953-8fece3489792" (UID: "50fb7e5f-0fc6-47d2-a953-8fece3489792"). InnerVolumeSpecName "kube-api-access-qrmt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741674 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741718 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741729 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741742 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrmt8\" (UniqueName: \"kubernetes.io/projected/50fb7e5f-0fc6-47d2-a953-8fece3489792-kube-api-access-qrmt8\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741753 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741765 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:32 crc kubenswrapper[4832]: I1204 06:22:32.741776 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50fb7e5f-0fc6-47d2-a953-8fece3489792-console-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.068780 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g2thm_50fb7e5f-0fc6-47d2-a953-8fece3489792/console/0.log" Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.068869 4832 generic.go:334] "Generic (PLEG): container finished" podID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerID="a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b" exitCode=2 Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.068963 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g2thm" Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.068979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2thm" event={"ID":"50fb7e5f-0fc6-47d2-a953-8fece3489792","Type":"ContainerDied","Data":"a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b"} Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.069043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g2thm" event={"ID":"50fb7e5f-0fc6-47d2-a953-8fece3489792","Type":"ContainerDied","Data":"42c2e2247b6ae8c20ba42d020b1fa00e3e998f73f3a84d4a11b6396f2d98a26a"} Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.069078 4832 scope.go:117] "RemoveContainer" containerID="a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b" Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.072645 4832 generic.go:334] "Generic (PLEG): container finished" podID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerID="4a70e5fb274c4c588f51741910b80c11f62d8f2aa81481c67bc313a080ee90a3" exitCode=0 Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.072737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" event={"ID":"d06954e0-1987-4ea1-8573-f3232b1a8e7e","Type":"ContainerDied","Data":"4a70e5fb274c4c588f51741910b80c11f62d8f2aa81481c67bc313a080ee90a3"} Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.120128 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g2thm"] Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.124021 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g2thm"] Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.127519 4832 scope.go:117] "RemoveContainer" containerID="a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b" Dec 04 06:22:33 crc kubenswrapper[4832]: E1204 06:22:33.128253 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b\": container with ID starting with a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b not found: ID does not exist" containerID="a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b" Dec 04 06:22:33 crc kubenswrapper[4832]: I1204 06:22:33.128280 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b"} err="failed to get container status \"a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b\": rpc error: code = NotFound desc = could not find container \"a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b\": container with ID starting with a878c6a05488e66ab6c3b7ad669719bda85660cad40232a81e1a4ecfb6d4e10b not found: ID does not exist" Dec 04 06:22:34 crc kubenswrapper[4832]: I1204 06:22:34.094041 4832 generic.go:334] "Generic (PLEG): container finished" podID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerID="f0b219b86710c03ed281e110e873ca5722e42654d12cf680df479860b0820cd9" exitCode=0 Dec 04 06:22:34 crc kubenswrapper[4832]: I1204 06:22:34.094103 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" 
event={"ID":"d06954e0-1987-4ea1-8573-f3232b1a8e7e","Type":"ContainerDied","Data":"f0b219b86710c03ed281e110e873ca5722e42654d12cf680df479860b0820cd9"} Dec 04 06:22:34 crc kubenswrapper[4832]: I1204 06:22:34.719696 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" path="/var/lib/kubelet/pods/50fb7e5f-0fc6-47d2-a953-8fece3489792/volumes" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.339151 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.481828 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-util\") pod \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.481908 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxcx\" (UniqueName: \"kubernetes.io/projected/d06954e0-1987-4ea1-8573-f3232b1a8e7e-kube-api-access-8bxcx\") pod \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.481989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-bundle\") pod \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\" (UID: \"d06954e0-1987-4ea1-8573-f3232b1a8e7e\") " Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.483027 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-bundle" (OuterVolumeSpecName: "bundle") pod "d06954e0-1987-4ea1-8573-f3232b1a8e7e" (UID: "d06954e0-1987-4ea1-8573-f3232b1a8e7e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.492017 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06954e0-1987-4ea1-8573-f3232b1a8e7e-kube-api-access-8bxcx" (OuterVolumeSpecName: "kube-api-access-8bxcx") pod "d06954e0-1987-4ea1-8573-f3232b1a8e7e" (UID: "d06954e0-1987-4ea1-8573-f3232b1a8e7e"). InnerVolumeSpecName "kube-api-access-8bxcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.496092 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-util" (OuterVolumeSpecName: "util") pod "d06954e0-1987-4ea1-8573-f3232b1a8e7e" (UID: "d06954e0-1987-4ea1-8573-f3232b1a8e7e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.583541 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.583581 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06954e0-1987-4ea1-8573-f3232b1a8e7e-util\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:35 crc kubenswrapper[4832]: I1204 06:22:35.583594 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bxcx\" (UniqueName: \"kubernetes.io/projected/d06954e0-1987-4ea1-8573-f3232b1a8e7e-kube-api-access-8bxcx\") on node \"crc\" DevicePath \"\"" Dec 04 06:22:36 crc kubenswrapper[4832]: I1204 06:22:36.111515 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" event={"ID":"d06954e0-1987-4ea1-8573-f3232b1a8e7e","Type":"ContainerDied","Data":"0e4b7c8c5f4c7a7955275794161f64070c0fdef9e531f31026e3dd03fd2a20ec"} Dec 04 06:22:36 crc kubenswrapper[4832]: I1204 06:22:36.111571 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e4b7c8c5f4c7a7955275794161f64070c0fdef9e531f31026e3dd03fd2a20ec" Dec 04 06:22:36 crc kubenswrapper[4832]: I1204 06:22:36.111613 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.304985 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv"] Dec 04 06:22:45 crc kubenswrapper[4832]: E1204 06:22:45.305789 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="pull" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.305803 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="pull" Dec 04 06:22:45 crc kubenswrapper[4832]: E1204 06:22:45.305814 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerName="console" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.305820 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerName="console" Dec 04 06:22:45 crc kubenswrapper[4832]: E1204 06:22:45.305831 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="extract" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.305838 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="extract" Dec 04 06:22:45 crc kubenswrapper[4832]: E1204 06:22:45.305848 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="util" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.305855 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="util" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.305950 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06954e0-1987-4ea1-8573-f3232b1a8e7e" containerName="extract" Dec 
04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.305964 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fb7e5f-0fc6-47d2-a953-8fece3489792" containerName="console" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.306450 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.308433 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.308508 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.309028 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.310991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.316071 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nk2fk" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.333254 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59a7f669-83c5-454f-a192-94642ab2fe06-webhook-cert\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.333328 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4jx\" (UniqueName: \"kubernetes.io/projected/59a7f669-83c5-454f-a192-94642ab2fe06-kube-api-access-sh4jx\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.333522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59a7f669-83c5-454f-a192-94642ab2fe06-apiservice-cert\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.379664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv"] Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.434559 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4jx\" (UniqueName: \"kubernetes.io/projected/59a7f669-83c5-454f-a192-94642ab2fe06-kube-api-access-sh4jx\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.434652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/59a7f669-83c5-454f-a192-94642ab2fe06-apiservice-cert\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.434696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59a7f669-83c5-454f-a192-94642ab2fe06-webhook-cert\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.443310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59a7f669-83c5-454f-a192-94642ab2fe06-webhook-cert\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.444941 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59a7f669-83c5-454f-a192-94642ab2fe06-apiservice-cert\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.458257 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4jx\" (UniqueName: \"kubernetes.io/projected/59a7f669-83c5-454f-a192-94642ab2fe06-kube-api-access-sh4jx\") pod \"metallb-operator-controller-manager-5dff6547bc-rp4gv\" (UID: \"59a7f669-83c5-454f-a192-94642ab2fe06\") " pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.624435 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.671787 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4"] Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.672644 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.674842 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5twpk" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.675377 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.678187 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.699298 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4"] Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.738326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c20405b-b33f-49ad-a10f-a9b32a3d320b-apiservice-cert\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.738404 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c20405b-b33f-49ad-a10f-a9b32a3d320b-webhook-cert\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.738562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5d2s\" (UniqueName: \"kubernetes.io/projected/6c20405b-b33f-49ad-a10f-a9b32a3d320b-kube-api-access-d5d2s\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.839287 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c20405b-b33f-49ad-a10f-a9b32a3d320b-apiservice-cert\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.839365 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c20405b-b33f-49ad-a10f-a9b32a3d320b-webhook-cert\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.839444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5d2s\" (UniqueName: \"kubernetes.io/projected/6c20405b-b33f-49ad-a10f-a9b32a3d320b-kube-api-access-d5d2s\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 
06:22:45.844489 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6c20405b-b33f-49ad-a10f-a9b32a3d320b-apiservice-cert\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.845560 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6c20405b-b33f-49ad-a10f-a9b32a3d320b-webhook-cert\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:45 crc kubenswrapper[4832]: I1204 06:22:45.865031 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5d2s\" (UniqueName: \"kubernetes.io/projected/6c20405b-b33f-49ad-a10f-a9b32a3d320b-kube-api-access-d5d2s\") pod \"metallb-operator-webhook-server-5f45496cc4-r8fz4\" (UID: \"6c20405b-b33f-49ad-a10f-a9b32a3d320b\") " pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:46 crc kubenswrapper[4832]: I1204 06:22:46.032958 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:46 crc kubenswrapper[4832]: I1204 06:22:46.867028 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv"] Dec 04 06:22:46 crc kubenswrapper[4832]: W1204 06:22:46.879221 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59a7f669_83c5_454f_a192_94642ab2fe06.slice/crio-7b5638e6fd16e74fc7e18551ab0812ec6b02596cbfea8b8fda30c8b6330af95c WatchSource:0}: Error finding container 7b5638e6fd16e74fc7e18551ab0812ec6b02596cbfea8b8fda30c8b6330af95c: Status 404 returned error can't find the container with id 7b5638e6fd16e74fc7e18551ab0812ec6b02596cbfea8b8fda30c8b6330af95c Dec 04 06:22:46 crc kubenswrapper[4832]: I1204 06:22:46.935077 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4"] Dec 04 06:22:46 crc kubenswrapper[4832]: W1204 06:22:46.937322 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c20405b_b33f_49ad_a10f_a9b32a3d320b.slice/crio-5bdd2456f13cd09ef534e88399a436e9554474af2162a5e480faef9688d6b591 WatchSource:0}: Error finding container 5bdd2456f13cd09ef534e88399a436e9554474af2162a5e480faef9688d6b591: Status 404 returned error can't find the container with id 5bdd2456f13cd09ef534e88399a436e9554474af2162a5e480faef9688d6b591 Dec 04 06:22:47 crc kubenswrapper[4832]: I1204 06:22:47.198623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" event={"ID":"59a7f669-83c5-454f-a192-94642ab2fe06","Type":"ContainerStarted","Data":"7b5638e6fd16e74fc7e18551ab0812ec6b02596cbfea8b8fda30c8b6330af95c"} Dec 04 06:22:47 crc kubenswrapper[4832]: I1204 06:22:47.199972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" 
event={"ID":"6c20405b-b33f-49ad-a10f-a9b32a3d320b","Type":"ContainerStarted","Data":"5bdd2456f13cd09ef534e88399a436e9554474af2162a5e480faef9688d6b591"} Dec 04 06:22:55 crc kubenswrapper[4832]: I1204 06:22:55.253685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" event={"ID":"6c20405b-b33f-49ad-a10f-a9b32a3d320b","Type":"ContainerStarted","Data":"5a2f654f742fc66d24e56cb0ecba514c94d4a9ceba21e7a184c66bb554d410b3"} Dec 04 06:22:55 crc kubenswrapper[4832]: I1204 06:22:55.254070 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:22:55 crc kubenswrapper[4832]: I1204 06:22:55.256502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" event={"ID":"59a7f669-83c5-454f-a192-94642ab2fe06","Type":"ContainerStarted","Data":"a89c7a21375c97bc8ff53affb49a431e8b572e3f74781bef16fb6f4e7c86bf42"} Dec 04 06:22:55 crc kubenswrapper[4832]: I1204 06:22:55.256628 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:22:55 crc kubenswrapper[4832]: I1204 06:22:55.272730 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" podStartSLOduration=2.483784638 podStartE2EDuration="10.272713116s" podCreationTimestamp="2025-12-04 06:22:45 +0000 UTC" firstStartedPulling="2025-12-04 06:22:46.94089912 +0000 UTC m=+822.553716826" lastFinishedPulling="2025-12-04 06:22:54.729827598 +0000 UTC m=+830.342645304" observedRunningTime="2025-12-04 06:22:55.267713912 +0000 UTC m=+830.880531618" watchObservedRunningTime="2025-12-04 06:22:55.272713116 +0000 UTC m=+830.885530822" Dec 04 06:22:55 crc kubenswrapper[4832]: I1204 06:22:55.291349 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" podStartSLOduration=2.463717074 podStartE2EDuration="10.291319753s" podCreationTimestamp="2025-12-04 06:22:45 +0000 UTC" firstStartedPulling="2025-12-04 06:22:46.881984291 +0000 UTC m=+822.494801997" lastFinishedPulling="2025-12-04 06:22:54.70958697 +0000 UTC m=+830.322404676" observedRunningTime="2025-12-04 06:22:55.289817706 +0000 UTC m=+830.902635432" watchObservedRunningTime="2025-12-04 06:22:55.291319753 +0000 UTC m=+830.904137449" Dec 04 06:23:06 crc kubenswrapper[4832]: I1204 06:23:06.038688 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f45496cc4-r8fz4" Dec 04 06:23:25 crc kubenswrapper[4832]: I1204 06:23:25.627329 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5dff6547bc-rp4gv" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.371559 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9wd74"] Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.374703 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.377617 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.378198 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.378281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jsh8m" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.380043 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6"] Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.381110 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.382688 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.387815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd68r\" (UniqueName: \"kubernetes.io/projected/9e61f6af-2150-458f-9ace-ce824ac50448-kube-api-access-gd68r\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.387963 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-reloader\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-startup\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v7w\" (UniqueName: \"kubernetes.io/projected/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-kube-api-access-k9v7w\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics-certs\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388492 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-sockets\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-conf\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.388628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e61f6af-2150-458f-9ace-ce824ac50448-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.394852 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6"] Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.483226 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cwbkx"] Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.484331 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.490433 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-conf\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.490501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e61f6af-2150-458f-9ace-ce824ac50448-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.490558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd68r\" (UniqueName: \"kubernetes.io/projected/9e61f6af-2150-458f-9ace-ce824ac50448-kube-api-access-gd68r\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-conf\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: E1204 06:23:26.491463 4832 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 04 06:23:26 crc kubenswrapper[4832]: E1204 06:23:26.491523 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e61f6af-2150-458f-9ace-ce824ac50448-cert podName:9e61f6af-2150-458f-9ace-ce824ac50448 nodeName:}" 
failed. No retries permitted until 2025-12-04 06:23:26.991504115 +0000 UTC m=+862.604321821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9e61f6af-2150-458f-9ace-ce824ac50448-cert") pod "frr-k8s-webhook-server-7fcb986d4-9h2b6" (UID: "9e61f6af-2150-458f-9ace-ce824ac50448") : secret "frr-k8s-webhook-server-cert" not found Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491775 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-reloader\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-startup\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9v7w\" (UniqueName: \"kubernetes.io/projected/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-kube-api-access-k9v7w\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics-certs\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.491968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-sockets\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.492249 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-sockets\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.492640 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.492812 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.492873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-reloader\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.492998 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.493020 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 04 06:23:26 crc kubenswrapper[4832]: E1204 06:23:26.493413 4832 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 04 06:23:26 crc kubenswrapper[4832]: E1204 06:23:26.493487 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics-certs podName:e4f68a6a-9df0-4ad3-bb51-b662bfb994e9 nodeName:}" failed. No retries permitted until 2025-12-04 06:23:26.993463453 +0000 UTC m=+862.606281149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics-certs") pod "frr-k8s-9wd74" (UID: "e4f68a6a-9df0-4ad3-bb51-b662bfb994e9") : secret "frr-k8s-certs-secret" not found Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.493771 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-frr-startup\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.498286 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w4n4q" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.501260 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-gclzl"] Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.502227 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.506786 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.523488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd68r\" (UniqueName: \"kubernetes.io/projected/9e61f6af-2150-458f-9ace-ce824ac50448-kube-api-access-gd68r\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.528111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9v7w\" (UniqueName: \"kubernetes.io/projected/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-kube-api-access-k9v7w\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.532848 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-gclzl"] Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.593152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqgs\" (UniqueName: \"kubernetes.io/projected/3a0011d7-d649-42fa-bd27-b98eb4a958a3-kube-api-access-ppqgs\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.593722 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.593768 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-metrics-certs\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.593790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3a0011d7-d649-42fa-bd27-b98eb4a958a3-metallb-excludel2\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.695458 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1b7280c-f3d1-4f5b-9f14-bf413e597077-metrics-certs\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.695541 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1b7280c-f3d1-4f5b-9f14-bf413e597077-cert\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc 
kubenswrapper[4832]: I1204 06:23:26.695584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: E1204 06:23:26.695822 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.695816 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-metrics-certs\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: E1204 06:23:26.695910 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist podName:3a0011d7-d649-42fa-bd27-b98eb4a958a3 nodeName:}" failed. No retries permitted until 2025-12-04 06:23:27.195880259 +0000 UTC m=+862.808697965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist") pod "speaker-cwbkx" (UID: "3a0011d7-d649-42fa-bd27-b98eb4a958a3") : secret "metallb-memberlist" not found Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.695930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3a0011d7-d649-42fa-bd27-b98eb4a958a3-metallb-excludel2\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.696004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8lg\" (UniqueName: \"kubernetes.io/projected/a1b7280c-f3d1-4f5b-9f14-bf413e597077-kube-api-access-xk8lg\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.696256 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqgs\" (UniqueName: \"kubernetes.io/projected/3a0011d7-d649-42fa-bd27-b98eb4a958a3-kube-api-access-ppqgs\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.696999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3a0011d7-d649-42fa-bd27-b98eb4a958a3-metallb-excludel2\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.702524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-metrics-certs\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.720741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqgs\" (UniqueName: 
\"kubernetes.io/projected/3a0011d7-d649-42fa-bd27-b98eb4a958a3-kube-api-access-ppqgs\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.797331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8lg\" (UniqueName: \"kubernetes.io/projected/a1b7280c-f3d1-4f5b-9f14-bf413e597077-kube-api-access-xk8lg\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.797454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1b7280c-f3d1-4f5b-9f14-bf413e597077-metrics-certs\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.797480 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1b7280c-f3d1-4f5b-9f14-bf413e597077-cert\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.799161 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.804284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a1b7280c-f3d1-4f5b-9f14-bf413e597077-metrics-certs\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.810847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1b7280c-f3d1-4f5b-9f14-bf413e597077-cert\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.812621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk8lg\" (UniqueName: \"kubernetes.io/projected/a1b7280c-f3d1-4f5b-9f14-bf413e597077-kube-api-access-xk8lg\") pod \"controller-f8648f98b-gclzl\" (UID: \"a1b7280c-f3d1-4f5b-9f14-bf413e597077\") " pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:26 crc kubenswrapper[4832]: I1204 06:23:26.871156 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.000732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e61f6af-2150-458f-9ace-ce824ac50448-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.000897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics-certs\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.009525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4f68a6a-9df0-4ad3-bb51-b662bfb994e9-metrics-certs\") pod \"frr-k8s-9wd74\" (UID: \"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9\") " pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.009525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9e61f6af-2150-458f-9ace-ce824ac50448-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9h2b6\" (UID: \"9e61f6af-2150-458f-9ace-ce824ac50448\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.044207 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.070796 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.203786 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:27 crc kubenswrapper[4832]: E1204 06:23:27.203972 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 04 06:23:27 crc kubenswrapper[4832]: E1204 06:23:27.204071 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist podName:3a0011d7-d649-42fa-bd27-b98eb4a958a3 nodeName:}" failed. No retries permitted until 2025-12-04 06:23:28.204045399 +0000 UTC m=+863.816863105 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist") pod "speaker-cwbkx" (UID: "3a0011d7-d649-42fa-bd27-b98eb4a958a3") : secret "metallb-memberlist" not found Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.349594 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-gclzl"] Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.438162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6"] Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.448601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-gclzl" event={"ID":"a1b7280c-f3d1-4f5b-9f14-bf413e597077","Type":"ContainerStarted","Data":"616c6d38cd97bddd2f051ac1b41efdaaf3e49aef783fae3f3f57df41380657aa"} Dec 04 06:23:27 crc kubenswrapper[4832]: I1204 06:23:27.449278 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"0906340828c39299cda015299c8ee5161cffd009dd4d72aa48bd292c90db5f78"} Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.216683 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.225823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3a0011d7-d649-42fa-bd27-b98eb4a958a3-memberlist\") pod \"speaker-cwbkx\" (UID: \"3a0011d7-d649-42fa-bd27-b98eb4a958a3\") " pod="metallb-system/speaker-cwbkx" Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.308522 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cwbkx" Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.458059 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwbkx" event={"ID":"3a0011d7-d649-42fa-bd27-b98eb4a958a3","Type":"ContainerStarted","Data":"83dd15303aa13cf2f0fcc9a7f9f992b30c118298b4538eb5471be9fc8e95c113"} Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.464891 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" event={"ID":"9e61f6af-2150-458f-9ace-ce824ac50448","Type":"ContainerStarted","Data":"199d0537e52edc00e14d1e2e672912b28c4a15c8dcaa83260bc1dae86db1603f"} Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.470530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-gclzl" event={"ID":"a1b7280c-f3d1-4f5b-9f14-bf413e597077","Type":"ContainerStarted","Data":"d332f2a8549af7dc153717bbeb7a65d4d80a22b82c20e381df9e31d9f6611aad"} Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.470595 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-gclzl" event={"ID":"a1b7280c-f3d1-4f5b-9f14-bf413e597077","Type":"ContainerStarted","Data":"0bf5e6f3211cbd664d10ba3a5e670bf5c49a8e5cfb999be29180690ab4d6c4bc"} Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.470775 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:28 crc kubenswrapper[4832]: I1204 06:23:28.491478 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-gclzl" podStartSLOduration=2.49145506 podStartE2EDuration="2.49145506s" podCreationTimestamp="2025-12-04 06:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:23:28.487718227 +0000 UTC m=+864.100535933" watchObservedRunningTime="2025-12-04 06:23:28.49145506 +0000 UTC m=+864.104272766" Dec 04 06:23:29 crc kubenswrapper[4832]: I1204 06:23:29.572320 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwbkx" event={"ID":"3a0011d7-d649-42fa-bd27-b98eb4a958a3","Type":"ContainerStarted","Data":"060e4f9354a39a1584ad554a630f6a6a6639ef41fbbb1f9a9c9750cccc19fdc5"} Dec 04 06:23:29 crc kubenswrapper[4832]: I1204 06:23:29.572831 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cwbkx" Dec 04 06:23:29 crc kubenswrapper[4832]: I1204 06:23:29.572849 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwbkx" event={"ID":"3a0011d7-d649-42fa-bd27-b98eb4a958a3","Type":"ContainerStarted","Data":"1babc27b9154c562153e0a03a0fc505ed539bbe411c70643d263f5c24b4dc4c5"} Dec 04 06:23:29 crc kubenswrapper[4832]: I1204 06:23:29.651151 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cwbkx" podStartSLOduration=3.65112121 podStartE2EDuration="3.65112121s" podCreationTimestamp="2025-12-04 06:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:23:29.643358937 +0000 UTC m=+865.256176643" watchObservedRunningTime="2025-12-04 06:23:29.65112121 +0000 UTC m=+865.263938916" Dec 04 06:23:35 crc kubenswrapper[4832]: I1204 06:23:35.362541 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:23:35 crc kubenswrapper[4832]: I1204 06:23:35.363151 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:23:38 crc kubenswrapper[4832]: I1204 06:23:38.312543 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cwbkx" Dec 04 06:23:38 crc kubenswrapper[4832]: I1204 06:23:38.673182 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4f68a6a-9df0-4ad3-bb51-b662bfb994e9" containerID="2036b388955cca71b1c85add84fb5a414bdc39a73e506fa0d76c217d3cb26198" exitCode=0 Dec 04 06:23:38 crc kubenswrapper[4832]: I1204 06:23:38.673237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerDied","Data":"2036b388955cca71b1c85add84fb5a414bdc39a73e506fa0d76c217d3cb26198"} Dec 04 06:23:38 crc kubenswrapper[4832]: I1204 06:23:38.674908 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" event={"ID":"9e61f6af-2150-458f-9ace-ce824ac50448","Type":"ContainerStarted","Data":"b3eed93d7c3c0595eb5a4cdfc68f2dd137a194328e4f61d69297e5bb0177fb6f"} Dec 04 06:23:38 crc kubenswrapper[4832]: I1204 06:23:38.675109 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:23:38 crc kubenswrapper[4832]: I1204 06:23:38.717854 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" podStartSLOduration=2.036248501 podStartE2EDuration="12.717834605s" podCreationTimestamp="2025-12-04 06:23:26 +0000 UTC" firstStartedPulling="2025-12-04 06:23:27.471824386 +0000 UTC m=+863.084642082" lastFinishedPulling="2025-12-04 06:23:38.15341048 +0000 UTC m=+873.766228186" observedRunningTime="2025-12-04 06:23:38.714917092 +0000 UTC m=+874.327734798" watchObservedRunningTime="2025-12-04 06:23:38.717834605 +0000 UTC m=+874.330652311" Dec 04 06:23:39 crc kubenswrapper[4832]: I1204 06:23:39.683752 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4f68a6a-9df0-4ad3-bb51-b662bfb994e9" containerID="61467b6223192f43a1402f497ea851d45fc98f35be194a957e35fdc8d249127a" exitCode=0 Dec 04 06:23:39 crc kubenswrapper[4832]: I1204 06:23:39.683836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerDied","Data":"61467b6223192f43a1402f497ea851d45fc98f35be194a957e35fdc8d249127a"} Dec 04 06:23:40 crc kubenswrapper[4832]: I1204 06:23:40.694405 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4f68a6a-9df0-4ad3-bb51-b662bfb994e9" containerID="925979b72cbc3cd4ba41a774d42fec1c5294f521e97fee7ca3c4e61f99788981" exitCode=0 Dec 04 06:23:40 crc kubenswrapper[4832]: I1204 06:23:40.694451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" 
event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerDied","Data":"925979b72cbc3cd4ba41a774d42fec1c5294f521e97fee7ca3c4e61f99788981"} Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.049193 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nbglj"] Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.049943 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.052085 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tl5cp" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.052452 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.054098 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.056810 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nbglj"] Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.190675 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9wl\" (UniqueName: \"kubernetes.io/projected/f1b56287-16d8-4d37-b202-06a57da39fc1-kube-api-access-rd9wl\") pod \"openstack-operator-index-nbglj\" (UID: \"f1b56287-16d8-4d37-b202-06a57da39fc1\") " pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.325236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9wl\" (UniqueName: \"kubernetes.io/projected/f1b56287-16d8-4d37-b202-06a57da39fc1-kube-api-access-rd9wl\") pod \"openstack-operator-index-nbglj\" (UID: \"f1b56287-16d8-4d37-b202-06a57da39fc1\") " pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.348345 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9wl\" (UniqueName: \"kubernetes.io/projected/f1b56287-16d8-4d37-b202-06a57da39fc1-kube-api-access-rd9wl\") pod \"openstack-operator-index-nbglj\" (UID: \"f1b56287-16d8-4d37-b202-06a57da39fc1\") " pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.371251 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.570982 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nbglj"] Dec 04 06:23:41 crc kubenswrapper[4832]: W1204 06:23:41.576141 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1b56287_16d8_4d37_b202_06a57da39fc1.slice/crio-7cba226634c0c204c7237ef81eb4099d0c8caa5d5108b225a537108bcd1faa23 WatchSource:0}: Error finding container 7cba226634c0c204c7237ef81eb4099d0c8caa5d5108b225a537108bcd1faa23: Status 404 returned error can't find the container with id 7cba226634c0c204c7237ef81eb4099d0c8caa5d5108b225a537108bcd1faa23 Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.703256 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nbglj" event={"ID":"f1b56287-16d8-4d37-b202-06a57da39fc1","Type":"ContainerStarted","Data":"7cba226634c0c204c7237ef81eb4099d0c8caa5d5108b225a537108bcd1faa23"} Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.707008 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"3f83e2655f70e87926df98c9ac07cc69cd0ba10a49dcb460755318d0cd6761dd"} Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.707037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"030b6bc3235cdaa5cfe4cd2c9e20ca98e60147133e98241a6d0ff95b6ab9b896"} Dec 04 06:23:41 crc kubenswrapper[4832]: I1204 06:23:41.707048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"99714c51b21a442d7c8bcd289fbc69cbe4c659b36b4bdac37571eb42d4e75288"} Dec 04 06:23:42 crc kubenswrapper[4832]: I1204 06:23:42.726494 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"54ccdb197c823564588debd469bfc5e05b33d056262bc6c02f6b712fbf361961"} Dec 04 06:23:42 crc kubenswrapper[4832]: I1204 06:23:42.726836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"a138f48ed8619feb2fa30bb4e5875efea053ff7c307e69ee97bf60d6b132f68b"} Dec 04 06:23:43 crc kubenswrapper[4832]: I1204 06:23:43.738751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9wd74" event={"ID":"e4f68a6a-9df0-4ad3-bb51-b662bfb994e9","Type":"ContainerStarted","Data":"649b97c3c6268cac93b0437c175360099e6727f4038644ee67a3ade3ddbcac85"} Dec 04 06:23:43 crc kubenswrapper[4832]: I1204 06:23:43.739093 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:43 crc kubenswrapper[4832]: I1204 06:23:43.765376 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9wd74" podStartSLOduration=6.808399659 podStartE2EDuration="17.765345978s" podCreationTimestamp="2025-12-04 06:23:26 +0000 UTC" firstStartedPulling="2025-12-04 06:23:27.178547713 +0000 UTC m=+862.791365409" lastFinishedPulling="2025-12-04 06:23:38.135494022 +0000 UTC 
m=+873.748311728" observedRunningTime="2025-12-04 06:23:43.763497462 +0000 UTC m=+879.376315168" watchObservedRunningTime="2025-12-04 06:23:43.765345978 +0000 UTC m=+879.378163684" Dec 04 06:23:44 crc kubenswrapper[4832]: I1204 06:23:44.433045 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nbglj"] Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.042595 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hzcq6"] Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.043339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.063233 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hzcq6"] Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.193319 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6smx\" (UniqueName: \"kubernetes.io/projected/37203d32-9ea9-4649-b269-71beabc056f9-kube-api-access-l6smx\") pod \"openstack-operator-index-hzcq6\" (UID: \"37203d32-9ea9-4649-b269-71beabc056f9\") " pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.294897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6smx\" (UniqueName: \"kubernetes.io/projected/37203d32-9ea9-4649-b269-71beabc056f9-kube-api-access-l6smx\") pod \"openstack-operator-index-hzcq6\" (UID: \"37203d32-9ea9-4649-b269-71beabc056f9\") " pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.318473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6smx\" (UniqueName: \"kubernetes.io/projected/37203d32-9ea9-4649-b269-71beabc056f9-kube-api-access-l6smx\") pod \"openstack-operator-index-hzcq6\" (UID: \"37203d32-9ea9-4649-b269-71beabc056f9\") " pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.413887 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.652119 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hzcq6"] Dec 04 06:23:45 crc kubenswrapper[4832]: W1204 06:23:45.659609 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37203d32_9ea9_4649_b269_71beabc056f9.slice/crio-270844f6bb8b53b57d8d3c16614ff68edb8a5d2c14259309dadb543ec7d5f62e WatchSource:0}: Error finding container 270844f6bb8b53b57d8d3c16614ff68edb8a5d2c14259309dadb543ec7d5f62e: Status 404 returned error can't find the container with id 270844f6bb8b53b57d8d3c16614ff68edb8a5d2c14259309dadb543ec7d5f62e Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.752306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nbglj" event={"ID":"f1b56287-16d8-4d37-b202-06a57da39fc1","Type":"ContainerStarted","Data":"c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7"} Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.752643 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nbglj" podUID="f1b56287-16d8-4d37-b202-06a57da39fc1" containerName="registry-server" containerID="cri-o://c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7" gracePeriod=2 Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.754293 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hzcq6" event={"ID":"37203d32-9ea9-4649-b269-71beabc056f9","Type":"ContainerStarted","Data":"270844f6bb8b53b57d8d3c16614ff68edb8a5d2c14259309dadb543ec7d5f62e"} Dec 04 06:23:45 crc kubenswrapper[4832]: I1204 06:23:45.773050 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nbglj" podStartSLOduration=1.086026208 podStartE2EDuration="4.773029186s" podCreationTimestamp="2025-12-04 06:23:41 +0000 UTC" firstStartedPulling="2025-12-04 06:23:41.578683899 +0000 UTC m=+877.191501605" lastFinishedPulling="2025-12-04 06:23:45.265686877 +0000 UTC m=+880.878504583" observedRunningTime="2025-12-04 06:23:45.772034001 +0000 UTC m=+881.384851717" watchObservedRunningTime="2025-12-04 06:23:45.773029186 +0000 UTC m=+881.385846892" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.083557 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.205737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd9wl\" (UniqueName: \"kubernetes.io/projected/f1b56287-16d8-4d37-b202-06a57da39fc1-kube-api-access-rd9wl\") pod \"f1b56287-16d8-4d37-b202-06a57da39fc1\" (UID: \"f1b56287-16d8-4d37-b202-06a57da39fc1\") " Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.212666 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b56287-16d8-4d37-b202-06a57da39fc1-kube-api-access-rd9wl" (OuterVolumeSpecName: "kube-api-access-rd9wl") pod "f1b56287-16d8-4d37-b202-06a57da39fc1" (UID: "f1b56287-16d8-4d37-b202-06a57da39fc1"). InnerVolumeSpecName "kube-api-access-rd9wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.322676 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd9wl\" (UniqueName: \"kubernetes.io/projected/f1b56287-16d8-4d37-b202-06a57da39fc1-kube-api-access-rd9wl\") on node \"crc\" DevicePath \"\"" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.762590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hzcq6" event={"ID":"37203d32-9ea9-4649-b269-71beabc056f9","Type":"ContainerStarted","Data":"8bd0e10b40e78d7d62fbb3c9132323faf336d4cd97cbc42a8d44450cfef10772"} Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.765990 4832 generic.go:334] "Generic (PLEG): container finished" podID="f1b56287-16d8-4d37-b202-06a57da39fc1" containerID="c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7" exitCode=0 Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.766029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nbglj" event={"ID":"f1b56287-16d8-4d37-b202-06a57da39fc1","Type":"ContainerDied","Data":"c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7"} Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.766053 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nbglj" event={"ID":"f1b56287-16d8-4d37-b202-06a57da39fc1","Type":"ContainerDied","Data":"7cba226634c0c204c7237ef81eb4099d0c8caa5d5108b225a537108bcd1faa23"} Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.766069 4832 scope.go:117] "RemoveContainer" containerID="c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.766070 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nbglj" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.787040 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hzcq6" podStartSLOduration=1.7269187879999999 podStartE2EDuration="1.787020919s" podCreationTimestamp="2025-12-04 06:23:45 +0000 UTC" firstStartedPulling="2025-12-04 06:23:45.662868035 +0000 UTC m=+881.275685741" lastFinishedPulling="2025-12-04 06:23:45.722970166 +0000 UTC m=+881.335787872" observedRunningTime="2025-12-04 06:23:46.780064245 +0000 UTC m=+882.392881971" watchObservedRunningTime="2025-12-04 06:23:46.787020919 +0000 UTC m=+882.399838635" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.794630 4832 scope.go:117] "RemoveContainer" containerID="c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7" Dec 04 06:23:46 crc kubenswrapper[4832]: E1204 06:23:46.795226 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7\": container with ID starting with c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7 not found: ID does not exist" containerID="c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.795294 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7"} err="failed to get container status \"c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7\": rpc error: code = NotFound desc = could not find container \"c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7\": container with ID starting with c0444d231a13fc9e42fbfd68e787013351457f26cba29a35570cd37ca29745c7 not found: ID does not exist" Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.799338 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nbglj"] Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.805449 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nbglj"] Dec 04 06:23:46 crc kubenswrapper[4832]: I1204 06:23:46.875472 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-gclzl" Dec 04 06:23:47 crc kubenswrapper[4832]: I1204 06:23:47.046219 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:47 crc kubenswrapper[4832]: I1204 06:23:47.083640 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:48 crc kubenswrapper[4832]: I1204 06:23:48.734692 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b56287-16d8-4d37-b202-06a57da39fc1" path="/var/lib/kubelet/pods/f1b56287-16d8-4d37-b202-06a57da39fc1/volumes" Dec 04 06:23:55 crc kubenswrapper[4832]: I1204 06:23:55.414893 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:55 crc kubenswrapper[4832]: I1204 06:23:55.415259 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:55 crc kubenswrapper[4832]: I1204 06:23:55.440672 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:55 crc kubenswrapper[4832]: I1204 06:23:55.851384 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hzcq6" Dec 04 06:23:57 crc kubenswrapper[4832]: I1204 06:23:57.049365 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9wd74" Dec 04 06:23:57 crc kubenswrapper[4832]: I1204 06:23:57.077254 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9h2b6" Dec 04 06:24:01 crc kubenswrapper[4832]: I1204 06:24:01.986643 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr"] Dec 04 06:24:01 crc kubenswrapper[4832]: E1204 06:24:01.987157 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b56287-16d8-4d37-b202-06a57da39fc1" containerName="registry-server" Dec 04 06:24:01 crc kubenswrapper[4832]: I1204 06:24:01.987170 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b56287-16d8-4d37-b202-06a57da39fc1" containerName="registry-server" Dec 04 06:24:01 crc kubenswrapper[4832]: I1204 06:24:01.987306 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b56287-16d8-4d37-b202-06a57da39fc1" containerName="registry-server" Dec 04 06:24:01 crc kubenswrapper[4832]: I1204 06:24:01.988309 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:01 crc kubenswrapper[4832]: I1204 06:24:01.990108 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zjtvp" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.002273 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr"] Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.086632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjmg\" (UniqueName: \"kubernetes.io/projected/b7bde3e6-de8b-40eb-abe9-6c923b41530b-kube-api-access-hbjmg\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.086751 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-util\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.086842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-bundle\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" 
Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.188440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbjmg\" (UniqueName: \"kubernetes.io/projected/b7bde3e6-de8b-40eb-abe9-6c923b41530b-kube-api-access-hbjmg\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.188539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-util\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.188642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-bundle\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.189229 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-bundle\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.190005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-util\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.229720 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbjmg\" (UniqueName: \"kubernetes.io/projected/b7bde3e6-de8b-40eb-abe9-6c923b41530b-kube-api-access-hbjmg\") pod \"94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.306762 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.778585 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr"] Dec 04 06:24:02 crc kubenswrapper[4832]: W1204 06:24:02.783265 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7bde3e6_de8b_40eb_abe9_6c923b41530b.slice/crio-9549a0490225f5e79a7705d119985ec6ac2ff7b7d6645508d316c87c5e8fa735 WatchSource:0}: Error finding container 9549a0490225f5e79a7705d119985ec6ac2ff7b7d6645508d316c87c5e8fa735: Status 404 returned error can't find the container with id 9549a0490225f5e79a7705d119985ec6ac2ff7b7d6645508d316c87c5e8fa735 Dec 04 06:24:02 crc kubenswrapper[4832]: I1204 06:24:02.872492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" event={"ID":"b7bde3e6-de8b-40eb-abe9-6c923b41530b","Type":"ContainerStarted","Data":"9549a0490225f5e79a7705d119985ec6ac2ff7b7d6645508d316c87c5e8fa735"} Dec 04 06:24:03 crc kubenswrapper[4832]: I1204 06:24:03.881321 4832 generic.go:334] "Generic (PLEG): container finished" podID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerID="a4fba0d296ff97a0e9baaa9b6a489834b5789974428284b6825f914307f219b3" exitCode=0 Dec 04 06:24:03 crc kubenswrapper[4832]: I1204 06:24:03.881422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" event={"ID":"b7bde3e6-de8b-40eb-abe9-6c923b41530b","Type":"ContainerDied","Data":"a4fba0d296ff97a0e9baaa9b6a489834b5789974428284b6825f914307f219b3"} Dec 04 06:24:04 crc kubenswrapper[4832]: I1204 06:24:04.890605 4832 generic.go:334] "Generic (PLEG): container finished" podID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerID="990e348fa311c719a02d52c2b73558c006f00be8fab2f1b69a163e366489f1f5" exitCode=0 Dec 04 06:24:04 crc kubenswrapper[4832]: I1204 06:24:04.890698 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" event={"ID":"b7bde3e6-de8b-40eb-abe9-6c923b41530b","Type":"ContainerDied","Data":"990e348fa311c719a02d52c2b73558c006f00be8fab2f1b69a163e366489f1f5"} Dec 04 06:24:05 crc kubenswrapper[4832]: I1204 06:24:05.362466 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:24:05 crc kubenswrapper[4832]: I1204 06:24:05.362596 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:24:05 crc kubenswrapper[4832]: I1204 06:24:05.901429 4832 generic.go:334] "Generic (PLEG): container finished" podID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerID="53f2d5224671c00356357f4254a277eded2be5f644cbd0a2509a28e8cc144ac4" exitCode=0 Dec 04 06:24:05 crc kubenswrapper[4832]: I1204 06:24:05.901489 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" event={"ID":"b7bde3e6-de8b-40eb-abe9-6c923b41530b","Type":"ContainerDied","Data":"53f2d5224671c00356357f4254a277eded2be5f644cbd0a2509a28e8cc144ac4"} Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.172981 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.368788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbjmg\" (UniqueName: \"kubernetes.io/projected/b7bde3e6-de8b-40eb-abe9-6c923b41530b-kube-api-access-hbjmg\") pod \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.369047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-bundle\") pod \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.369088 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-util\") pod \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\" (UID: \"b7bde3e6-de8b-40eb-abe9-6c923b41530b\") " Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.370002 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-bundle" (OuterVolumeSpecName: "bundle") pod "b7bde3e6-de8b-40eb-abe9-6c923b41530b" (UID: "b7bde3e6-de8b-40eb-abe9-6c923b41530b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.375972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7bde3e6-de8b-40eb-abe9-6c923b41530b-kube-api-access-hbjmg" (OuterVolumeSpecName: "kube-api-access-hbjmg") pod "b7bde3e6-de8b-40eb-abe9-6c923b41530b" (UID: "b7bde3e6-de8b-40eb-abe9-6c923b41530b"). InnerVolumeSpecName "kube-api-access-hbjmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.382972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-util" (OuterVolumeSpecName: "util") pod "b7bde3e6-de8b-40eb-abe9-6c923b41530b" (UID: "b7bde3e6-de8b-40eb-abe9-6c923b41530b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.470357 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbjmg\" (UniqueName: \"kubernetes.io/projected/b7bde3e6-de8b-40eb-abe9-6c923b41530b-kube-api-access-hbjmg\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.470420 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.470433 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7bde3e6-de8b-40eb-abe9-6c923b41530b-util\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.918827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" event={"ID":"b7bde3e6-de8b-40eb-abe9-6c923b41530b","Type":"ContainerDied","Data":"9549a0490225f5e79a7705d119985ec6ac2ff7b7d6645508d316c87c5e8fa735"} Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.918890 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9549a0490225f5e79a7705d119985ec6ac2ff7b7d6645508d316c87c5e8fa735" Dec 04 06:24:07 crc kubenswrapper[4832]: I1204 06:24:07.918928 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.566249 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr"] Dec 04 06:24:14 crc kubenswrapper[4832]: E1204 06:24:14.566861 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="util" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.566874 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="util" Dec 04 06:24:14 crc kubenswrapper[4832]: E1204 06:24:14.566889 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="extract" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.566898 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="extract" Dec 04 06:24:14 crc kubenswrapper[4832]: E1204 06:24:14.566909 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="pull" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.566917 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="pull" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.567030 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7bde3e6-de8b-40eb-abe9-6c923b41530b" containerName="extract" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.567504 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.570484 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xtnnl" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.593782 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr"] Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.666308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zlf8\" (UniqueName: \"kubernetes.io/projected/5413f6c9-52d6-44d8-b58b-babf5f5d4541-kube-api-access-4zlf8\") pod \"openstack-operator-controller-operator-7c75cfccc8-zchmr\" (UID: \"5413f6c9-52d6-44d8-b58b-babf5f5d4541\") " pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.767295 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zlf8\" (UniqueName: \"kubernetes.io/projected/5413f6c9-52d6-44d8-b58b-babf5f5d4541-kube-api-access-4zlf8\") pod \"openstack-operator-controller-operator-7c75cfccc8-zchmr\" (UID: \"5413f6c9-52d6-44d8-b58b-babf5f5d4541\") " pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.785800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zlf8\" (UniqueName: \"kubernetes.io/projected/5413f6c9-52d6-44d8-b58b-babf5f5d4541-kube-api-access-4zlf8\") pod \"openstack-operator-controller-operator-7c75cfccc8-zchmr\" (UID: \"5413f6c9-52d6-44d8-b58b-babf5f5d4541\") " pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:14 crc kubenswrapper[4832]: I1204 06:24:14.890935 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.068799 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hdwp"] Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.070041 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.071855 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-utilities\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.071921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-catalog-content\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.071967 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kc9\" (UniqueName: \"kubernetes.io/projected/cdff6907-f1d5-4dc1-a0da-06efe9a19640-kube-api-access-d4kc9\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.079844 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdwp"] Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.115321 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr"] Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.173664 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-utilities\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.173744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-catalog-content\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.173789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kc9\" (UniqueName: \"kubernetes.io/projected/cdff6907-f1d5-4dc1-a0da-06efe9a19640-kube-api-access-d4kc9\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.175196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-utilities\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.175452 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-catalog-content\") pod \"certified-operators-6hdwp\" 
(UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.194155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kc9\" (UniqueName: \"kubernetes.io/projected/cdff6907-f1d5-4dc1-a0da-06efe9a19640-kube-api-access-d4kc9\") pod \"certified-operators-6hdwp\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.391717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.623347 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hdwp"] Dec 04 06:24:15 crc kubenswrapper[4832]: W1204 06:24:15.643896 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdff6907_f1d5_4dc1_a0da_06efe9a19640.slice/crio-93d9bf49761a379300a16e35f57d80806cb3378f545bfe07579f30f832468e4c WatchSource:0}: Error finding container 93d9bf49761a379300a16e35f57d80806cb3378f545bfe07579f30f832468e4c: Status 404 returned error can't find the container with id 93d9bf49761a379300a16e35f57d80806cb3378f545bfe07579f30f832468e4c Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.982759 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerID="d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d" exitCode=0 Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.982837 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerDied","Data":"d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d"} Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.982927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerStarted","Data":"93d9bf49761a379300a16e35f57d80806cb3378f545bfe07579f30f832468e4c"} Dec 04 06:24:15 crc kubenswrapper[4832]: I1204 06:24:15.987185 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" event={"ID":"5413f6c9-52d6-44d8-b58b-babf5f5d4541","Type":"ContainerStarted","Data":"53565925c5afc08f89ad66563b29a69210902900972617dd0e20b8e5b0efa0f2"} Dec 04 06:24:17 crc kubenswrapper[4832]: I1204 06:24:17.012727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerStarted","Data":"06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29"} Dec 04 06:24:18 crc kubenswrapper[4832]: I1204 06:24:18.036472 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerID="06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29" exitCode=0 Dec 04 06:24:18 crc kubenswrapper[4832]: I1204 06:24:18.036661 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" 
event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerDied","Data":"06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29"} Dec 04 06:24:22 crc kubenswrapper[4832]: I1204 06:24:22.063844 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerStarted","Data":"20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c"} Dec 04 06:24:22 crc kubenswrapper[4832]: I1204 06:24:22.065683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" event={"ID":"5413f6c9-52d6-44d8-b58b-babf5f5d4541","Type":"ContainerStarted","Data":"30a2f5eb264e51d74d439d1311610b9c26a4c8d7b21cb35fa77cc74ab4fbdbf3"} Dec 04 06:24:22 crc kubenswrapper[4832]: I1204 06:24:22.065847 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:22 crc kubenswrapper[4832]: I1204 06:24:22.122911 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" podStartSLOduration=1.5327123230000002 podStartE2EDuration="8.122895423s" podCreationTimestamp="2025-12-04 06:24:14 +0000 UTC" firstStartedPulling="2025-12-04 06:24:15.141408583 +0000 UTC m=+910.754226289" lastFinishedPulling="2025-12-04 06:24:21.731591663 +0000 UTC m=+917.344409389" observedRunningTime="2025-12-04 06:24:22.120246508 +0000 UTC m=+917.733064224" watchObservedRunningTime="2025-12-04 06:24:22.122895423 +0000 UTC m=+917.735713119" Dec 04 06:24:22 crc kubenswrapper[4832]: I1204 06:24:22.123890 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hdwp" podStartSLOduration=1.4016256999999999 podStartE2EDuration="7.123883948s" podCreationTimestamp="2025-12-04 06:24:15 +0000 UTC" firstStartedPulling="2025-12-04 06:24:15.986584245 +0000 UTC m=+911.599401951" lastFinishedPulling="2025-12-04 06:24:21.708842483 +0000 UTC m=+917.321660199" observedRunningTime="2025-12-04 06:24:22.084946782 +0000 UTC m=+917.697764488" watchObservedRunningTime="2025-12-04 06:24:22.123883948 +0000 UTC m=+917.736701654" Dec 04 06:24:25 crc kubenswrapper[4832]: I1204 06:24:25.392660 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:25 crc kubenswrapper[4832]: I1204 06:24:25.393354 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:25 crc kubenswrapper[4832]: I1204 06:24:25.435245 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:26 crc kubenswrapper[4832]: I1204 06:24:26.131332 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:27 crc kubenswrapper[4832]: I1204 06:24:27.857714 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdwp"] Dec 04 06:24:28 crc kubenswrapper[4832]: I1204 06:24:28.105296 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hdwp" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="registry-server" 
containerID="cri-o://20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c" gracePeriod=2 Dec 04 06:24:28 crc kubenswrapper[4832]: I1204 06:24:28.955668 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.080479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4kc9\" (UniqueName: \"kubernetes.io/projected/cdff6907-f1d5-4dc1-a0da-06efe9a19640-kube-api-access-d4kc9\") pod \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.080543 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-catalog-content\") pod \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.080680 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-utilities\") pod \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\" (UID: \"cdff6907-f1d5-4dc1-a0da-06efe9a19640\") " Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.081837 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-utilities" (OuterVolumeSpecName: "utilities") pod "cdff6907-f1d5-4dc1-a0da-06efe9a19640" (UID: "cdff6907-f1d5-4dc1-a0da-06efe9a19640"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.087531 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdff6907-f1d5-4dc1-a0da-06efe9a19640-kube-api-access-d4kc9" (OuterVolumeSpecName: "kube-api-access-d4kc9") pod "cdff6907-f1d5-4dc1-a0da-06efe9a19640" (UID: "cdff6907-f1d5-4dc1-a0da-06efe9a19640"). InnerVolumeSpecName "kube-api-access-d4kc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.113519 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerID="20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c" exitCode=0 Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.113566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerDied","Data":"20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c"} Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.113595 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hdwp" event={"ID":"cdff6907-f1d5-4dc1-a0da-06efe9a19640","Type":"ContainerDied","Data":"93d9bf49761a379300a16e35f57d80806cb3378f545bfe07579f30f832468e4c"} Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.113612 4832 scope.go:117] "RemoveContainer" containerID="20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.113730 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hdwp" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.125845 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdff6907-f1d5-4dc1-a0da-06efe9a19640" (UID: "cdff6907-f1d5-4dc1-a0da-06efe9a19640"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.130543 4832 scope.go:117] "RemoveContainer" containerID="06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.150541 4832 scope.go:117] "RemoveContainer" containerID="d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.178147 4832 scope.go:117] "RemoveContainer" containerID="20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c" Dec 04 06:24:29 crc kubenswrapper[4832]: E1204 06:24:29.178688 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c\": container with ID starting with 20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c not found: ID does not exist" containerID="20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.178732 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c"} err="failed to get container status \"20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c\": rpc error: code = NotFound desc = could not find container \"20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c\": container with ID starting with 20e61190da4aa602456d09c918bb4494830b47af9fa10fffdbf88060ae447e9c not found: ID does not exist" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.178765 4832 scope.go:117] "RemoveContainer" containerID="06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29" Dec 04 06:24:29 crc kubenswrapper[4832]: E1204 06:24:29.179072 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29\": container with ID starting with 06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29 not found: ID does not exist" containerID="06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.179118 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29"} err="failed to get container status \"06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29\": rpc error: code = NotFound desc = could not find container \"06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29\": container with ID starting with 06509213f87f0f89c26083167909664c8455059c0f2d39ac0f095f92e3ba7e29 not found: ID does not exist" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.179146 4832 scope.go:117] "RemoveContainer" containerID="d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d" Dec 04 06:24:29 crc 
kubenswrapper[4832]: E1204 06:24:29.179432 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d\": container with ID starting with d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d not found: ID does not exist" containerID="d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.179464 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d"} err="failed to get container status \"d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d\": rpc error: code = NotFound desc = could not find container \"d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d\": container with ID starting with d59836911e75fbb70c12f702325479cdc9dda8c2ef6616ebcd9f9d4d2463161d not found: ID does not exist" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.182352 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.182372 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4kc9\" (UniqueName: \"kubernetes.io/projected/cdff6907-f1d5-4dc1-a0da-06efe9a19640-kube-api-access-d4kc9\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.182382 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdff6907-f1d5-4dc1-a0da-06efe9a19640-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.457152 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hdwp"] Dec 04 06:24:29 crc kubenswrapper[4832]: I1204 06:24:29.460995 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hdwp"] Dec 04 06:24:30 crc kubenswrapper[4832]: I1204 06:24:30.718613 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" path="/var/lib/kubelet/pods/cdff6907-f1d5-4dc1-a0da-06efe9a19640/volumes" Dec 04 06:24:34 crc kubenswrapper[4832]: I1204 06:24:34.894940 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7c75cfccc8-zchmr" Dec 04 06:24:35 crc kubenswrapper[4832]: I1204 06:24:35.362592 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:24:35 crc kubenswrapper[4832]: I1204 06:24:35.362647 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:24:35 crc kubenswrapper[4832]: I1204 06:24:35.362693 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:24:35 crc kubenswrapper[4832]: I1204 06:24:35.363273 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16a2a9ef3e62675c85662671dfe30288c81082d91cc1c7e3a8b0d7e2b9dfbee1"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:24:35 crc kubenswrapper[4832]: I1204 06:24:35.363347 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://16a2a9ef3e62675c85662671dfe30288c81082d91cc1c7e3a8b0d7e2b9dfbee1" gracePeriod=600 Dec 04 06:24:36 crc kubenswrapper[4832]: I1204 06:24:36.164504 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="16a2a9ef3e62675c85662671dfe30288c81082d91cc1c7e3a8b0d7e2b9dfbee1" exitCode=0 Dec 04 06:24:36 crc kubenswrapper[4832]: I1204 06:24:36.164563 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"16a2a9ef3e62675c85662671dfe30288c81082d91cc1c7e3a8b0d7e2b9dfbee1"} Dec 04 06:24:36 crc kubenswrapper[4832]: I1204 06:24:36.165128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61"} Dec 04 06:24:36 crc kubenswrapper[4832]: I1204 06:24:36.165185 4832 scope.go:117] "RemoveContainer" containerID="469128422ffdf7c9a116b1453571faa4112e83e21b46cb276494efc9be588617" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.462499 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z8hxf"] Dec 04 06:24:41 crc kubenswrapper[4832]: E1204 06:24:41.463408 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="extract-content" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.463425 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="extract-content" Dec 04 06:24:41 crc kubenswrapper[4832]: E1204 06:24:41.463446 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="registry-server" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.463461 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="registry-server" Dec 04 06:24:41 crc kubenswrapper[4832]: E1204 06:24:41.463479 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="extract-utilities" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.463487 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" containerName="extract-utilities" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.463621 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdff6907-f1d5-4dc1-a0da-06efe9a19640" 
containerName="registry-server" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.464531 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.488531 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8hxf"] Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.649849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-utilities\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.650350 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdpr\" (UniqueName: \"kubernetes.io/projected/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-kube-api-access-8mdpr\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.650501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-catalog-content\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.751205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mdpr\" (UniqueName: \"kubernetes.io/projected/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-kube-api-access-8mdpr\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.751583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-catalog-content\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.751721 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-utilities\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.752077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-catalog-content\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.752267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-utilities\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " 
pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:41 crc kubenswrapper[4832]: I1204 06:24:41.783440 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mdpr\" (UniqueName: \"kubernetes.io/projected/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-kube-api-access-8mdpr\") pod \"redhat-marketplace-z8hxf\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:42 crc kubenswrapper[4832]: I1204 06:24:42.082580 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:42 crc kubenswrapper[4832]: I1204 06:24:42.525190 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8hxf"] Dec 04 06:24:43 crc kubenswrapper[4832]: I1204 06:24:43.223868 4832 generic.go:334] "Generic (PLEG): container finished" podID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerID="7c321d2521cde9db6c0f9dc00d323b1ea3d5ec351beff1fdc805236737f7a85b" exitCode=0 Dec 04 06:24:43 crc kubenswrapper[4832]: I1204 06:24:43.223920 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8hxf" event={"ID":"17a87100-1f0e-4ad2-b5a0-cf1b3241e678","Type":"ContainerDied","Data":"7c321d2521cde9db6c0f9dc00d323b1ea3d5ec351beff1fdc805236737f7a85b"} Dec 04 06:24:43 crc kubenswrapper[4832]: I1204 06:24:43.223948 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8hxf" event={"ID":"17a87100-1f0e-4ad2-b5a0-cf1b3241e678","Type":"ContainerStarted","Data":"8f0f563ef184ab75c66b15d1a29dd686c4ab4fda3f477defcdb65630d906fe2c"} Dec 04 06:24:44 crc kubenswrapper[4832]: I1204 06:24:44.232512 4832 generic.go:334] "Generic (PLEG): container finished" podID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerID="efabf04fe18470897d8750eb208285d2e095e7e186f82d227931dd53c8e51328" exitCode=0 Dec 04 06:24:44 crc kubenswrapper[4832]: I1204 06:24:44.232691 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8hxf" event={"ID":"17a87100-1f0e-4ad2-b5a0-cf1b3241e678","Type":"ContainerDied","Data":"efabf04fe18470897d8750eb208285d2e095e7e186f82d227931dd53c8e51328"} Dec 04 06:24:45 crc kubenswrapper[4832]: I1204 06:24:45.252280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8hxf" event={"ID":"17a87100-1f0e-4ad2-b5a0-cf1b3241e678","Type":"ContainerStarted","Data":"c52f3caffbf60b7c4e958c04bf7eda82742967ede6c15f2e5eab68cb738dec78"} Dec 04 06:24:45 crc kubenswrapper[4832]: I1204 06:24:45.275525 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z8hxf" podStartSLOduration=2.84103474 podStartE2EDuration="4.275505926s" podCreationTimestamp="2025-12-04 06:24:41 +0000 UTC" firstStartedPulling="2025-12-04 06:24:43.226102822 +0000 UTC m=+938.838920518" lastFinishedPulling="2025-12-04 06:24:44.660573998 +0000 UTC m=+940.273391704" observedRunningTime="2025-12-04 06:24:45.269175368 +0000 UTC m=+940.881993074" watchObservedRunningTime="2025-12-04 06:24:45.275505926 +0000 UTC m=+940.888323632" Dec 04 06:24:47 crc kubenswrapper[4832]: I1204 06:24:47.869011 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bfcb6"] Dec 04 06:24:47 crc kubenswrapper[4832]: I1204 06:24:47.870542 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:47 crc kubenswrapper[4832]: I1204 06:24:47.898940 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfcb6"] Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.033028 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-catalog-content\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.033076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bdzw\" (UniqueName: \"kubernetes.io/projected/db5ac33c-54f7-4306-bc07-6ff6fd971437-kube-api-access-9bdzw\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.033150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-utilities\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.133863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-utilities\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.134182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-catalog-content\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.134215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bdzw\" (UniqueName: \"kubernetes.io/projected/db5ac33c-54f7-4306-bc07-6ff6fd971437-kube-api-access-9bdzw\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.134371 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-utilities\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.134645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-catalog-content\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.155192 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9bdzw\" (UniqueName: \"kubernetes.io/projected/db5ac33c-54f7-4306-bc07-6ff6fd971437-kube-api-access-9bdzw\") pod \"community-operators-bfcb6\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.189094 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:48 crc kubenswrapper[4832]: I1204 06:24:48.779866 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfcb6"] Dec 04 06:24:48 crc kubenswrapper[4832]: W1204 06:24:48.789555 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb5ac33c_54f7_4306_bc07_6ff6fd971437.slice/crio-a3a9cff4254312f4d3775b95a3c1f6efe65d83116ca55eb5089b3d12e7c4ea9b WatchSource:0}: Error finding container a3a9cff4254312f4d3775b95a3c1f6efe65d83116ca55eb5089b3d12e7c4ea9b: Status 404 returned error can't find the container with id a3a9cff4254312f4d3775b95a3c1f6efe65d83116ca55eb5089b3d12e7c4ea9b Dec 04 06:24:49 crc kubenswrapper[4832]: I1204 06:24:49.334916 4832 generic.go:334] "Generic (PLEG): container finished" podID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerID="da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c" exitCode=0 Dec 04 06:24:49 crc kubenswrapper[4832]: I1204 06:24:49.334963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerDied","Data":"da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c"} Dec 04 06:24:49 crc kubenswrapper[4832]: I1204 06:24:49.334994 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerStarted","Data":"a3a9cff4254312f4d3775b95a3c1f6efe65d83116ca55eb5089b3d12e7c4ea9b"} Dec 04 06:24:51 crc kubenswrapper[4832]: I1204 06:24:51.368095 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerStarted","Data":"22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4"} Dec 04 06:24:52 crc kubenswrapper[4832]: I1204 06:24:52.154973 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:52 crc kubenswrapper[4832]: I1204 06:24:52.155280 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:52 crc kubenswrapper[4832]: I1204 06:24:52.365889 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:52 crc kubenswrapper[4832]: I1204 06:24:52.480099 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:53 crc kubenswrapper[4832]: I1204 06:24:53.380433 4832 generic.go:334] "Generic (PLEG): container finished" podID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerID="22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4" exitCode=0 Dec 04 06:24:53 crc kubenswrapper[4832]: I1204 06:24:53.381186 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerDied","Data":"22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4"} Dec 04 06:24:54 crc kubenswrapper[4832]: I1204 06:24:54.389488 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerStarted","Data":"11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a"} Dec 04 06:24:54 crc kubenswrapper[4832]: I1204 06:24:54.414603 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bfcb6" podStartSLOduration=2.989790201 podStartE2EDuration="7.414567854s" podCreationTimestamp="2025-12-04 06:24:47 +0000 UTC" firstStartedPulling="2025-12-04 06:24:49.336831468 +0000 UTC m=+944.949649174" lastFinishedPulling="2025-12-04 06:24:53.761609121 +0000 UTC m=+949.374426827" observedRunningTime="2025-12-04 06:24:54.414231115 +0000 UTC m=+950.027048821" watchObservedRunningTime="2025-12-04 06:24:54.414567854 +0000 UTC m=+950.027385580" Dec 04 06:24:56 crc kubenswrapper[4832]: I1204 06:24:56.070109 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8hxf"] Dec 04 06:24:56 crc kubenswrapper[4832]: I1204 06:24:56.080653 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z8hxf" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="registry-server" containerID="cri-o://c52f3caffbf60b7c4e958c04bf7eda82742967ede6c15f2e5eab68cb738dec78" gracePeriod=2 Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.414709 4832 generic.go:334] "Generic (PLEG): container finished" podID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerID="c52f3caffbf60b7c4e958c04bf7eda82742967ede6c15f2e5eab68cb738dec78" exitCode=0 Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.414791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8hxf" event={"ID":"17a87100-1f0e-4ad2-b5a0-cf1b3241e678","Type":"ContainerDied","Data":"c52f3caffbf60b7c4e958c04bf7eda82742967ede6c15f2e5eab68cb738dec78"} Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.678070 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.752683 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-catalog-content\") pod \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.752767 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-utilities\") pod \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.753484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-utilities" (OuterVolumeSpecName: "utilities") pod "17a87100-1f0e-4ad2-b5a0-cf1b3241e678" (UID: "17a87100-1f0e-4ad2-b5a0-cf1b3241e678"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.753579 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mdpr\" (UniqueName: \"kubernetes.io/projected/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-kube-api-access-8mdpr\") pod \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\" (UID: \"17a87100-1f0e-4ad2-b5a0-cf1b3241e678\") " Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.754907 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.763552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-kube-api-access-8mdpr" (OuterVolumeSpecName: "kube-api-access-8mdpr") pod "17a87100-1f0e-4ad2-b5a0-cf1b3241e678" (UID: "17a87100-1f0e-4ad2-b5a0-cf1b3241e678"). InnerVolumeSpecName "kube-api-access-8mdpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.769862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17a87100-1f0e-4ad2-b5a0-cf1b3241e678" (UID: "17a87100-1f0e-4ad2-b5a0-cf1b3241e678"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.855839 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mdpr\" (UniqueName: \"kubernetes.io/projected/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-kube-api-access-8mdpr\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:57 crc kubenswrapper[4832]: I1204 06:24:57.855875 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17a87100-1f0e-4ad2-b5a0-cf1b3241e678-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.233707 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.234109 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.408673 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.422492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8hxf" event={"ID":"17a87100-1f0e-4ad2-b5a0-cf1b3241e678","Type":"ContainerDied","Data":"8f0f563ef184ab75c66b15d1a29dd686c4ab4fda3f477defcdb65630d906fe2c"} Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.422537 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8hxf" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.422557 4832 scope.go:117] "RemoveContainer" containerID="c52f3caffbf60b7c4e958c04bf7eda82742967ede6c15f2e5eab68cb738dec78" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.460884 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8hxf"] Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.468114 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8hxf"] Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.477049 4832 scope.go:117] "RemoveContainer" containerID="efabf04fe18470897d8750eb208285d2e095e7e186f82d227931dd53c8e51328" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.517506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.522157 4832 scope.go:117] "RemoveContainer" containerID="7c321d2521cde9db6c0f9dc00d323b1ea3d5ec351beff1fdc805236737f7a85b" Dec 04 06:24:58 crc kubenswrapper[4832]: I1204 06:24:58.718458 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" path="/var/lib/kubelet/pods/17a87100-1f0e-4ad2-b5a0-cf1b3241e678/volumes" Dec 04 06:25:00 crc kubenswrapper[4832]: I1204 06:25:00.653838 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfcb6"] Dec 04 06:25:00 crc kubenswrapper[4832]: I1204 06:25:00.654345 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bfcb6" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="registry-server" containerID="cri-o://11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a" gracePeriod=2 Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.050936 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.133766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bdzw\" (UniqueName: \"kubernetes.io/projected/db5ac33c-54f7-4306-bc07-6ff6fd971437-kube-api-access-9bdzw\") pod \"db5ac33c-54f7-4306-bc07-6ff6fd971437\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.134233 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-utilities\") pod \"db5ac33c-54f7-4306-bc07-6ff6fd971437\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.134291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-catalog-content\") pod \"db5ac33c-54f7-4306-bc07-6ff6fd971437\" (UID: \"db5ac33c-54f7-4306-bc07-6ff6fd971437\") " Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.135019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-utilities" (OuterVolumeSpecName: "utilities") pod "db5ac33c-54f7-4306-bc07-6ff6fd971437" (UID: "db5ac33c-54f7-4306-bc07-6ff6fd971437"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.148015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5ac33c-54f7-4306-bc07-6ff6fd971437-kube-api-access-9bdzw" (OuterVolumeSpecName: "kube-api-access-9bdzw") pod "db5ac33c-54f7-4306-bc07-6ff6fd971437" (UID: "db5ac33c-54f7-4306-bc07-6ff6fd971437"). InnerVolumeSpecName "kube-api-access-9bdzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.188221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db5ac33c-54f7-4306-bc07-6ff6fd971437" (UID: "db5ac33c-54f7-4306-bc07-6ff6fd971437"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.236510 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bdzw\" (UniqueName: \"kubernetes.io/projected/db5ac33c-54f7-4306-bc07-6ff6fd971437-kube-api-access-9bdzw\") on node \"crc\" DevicePath \"\"" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.236540 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.236549 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db5ac33c-54f7-4306-bc07-6ff6fd971437-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.445660 4832 generic.go:334] "Generic (PLEG): container finished" podID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerID="11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a" exitCode=0 Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.445712 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerDied","Data":"11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a"} Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.445742 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfcb6" event={"ID":"db5ac33c-54f7-4306-bc07-6ff6fd971437","Type":"ContainerDied","Data":"a3a9cff4254312f4d3775b95a3c1f6efe65d83116ca55eb5089b3d12e7c4ea9b"} Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.445762 4832 scope.go:117] "RemoveContainer" containerID="11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.445879 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bfcb6" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.473605 4832 scope.go:117] "RemoveContainer" containerID="22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.475326 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfcb6"] Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.481687 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bfcb6"] Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.490325 4832 scope.go:117] "RemoveContainer" containerID="da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.514555 4832 scope.go:117] "RemoveContainer" containerID="11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a" Dec 04 06:25:01 crc kubenswrapper[4832]: E1204 06:25:01.515527 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a\": container with ID starting with 11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a not found: ID does not exist" containerID="11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.515564 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a"} err="failed to get container status \"11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a\": rpc error: code = NotFound desc = could not find container \"11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a\": container with ID starting with 11469c7a5315441c0163a5dd0777b2074ac0932310a317c3ccdc1c9f8116270a not found: ID does not exist" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.515587 4832 scope.go:117] "RemoveContainer" containerID="22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4" Dec 04 06:25:01 crc kubenswrapper[4832]: E1204 06:25:01.515813 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4\": container with ID starting with 22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4 not found: ID does not exist" containerID="22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.515840 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4"} err="failed to get container status \"22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4\": rpc error: code = NotFound desc = could not find container \"22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4\": container with ID starting with 22f75ca1e0e1edba5831a4a5e1d78e9b719327e6f9886b055a6efb665ef927a4 not found: ID does not exist" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.515855 4832 scope.go:117] "RemoveContainer" containerID="da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c" Dec 04 06:25:01 crc kubenswrapper[4832]: E1204 06:25:01.516038 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c\": container with ID starting with da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c not found: ID does not exist" containerID="da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c" Dec 04 06:25:01 crc kubenswrapper[4832]: I1204 06:25:01.516067 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c"} err="failed to get container status \"da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c\": rpc error: code = NotFound desc = could not find container \"da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c\": container with ID starting with da3ac7635a5cd2df6b1150fb77366746e79b45041ad61c9e6b0b5c7dcf88676c not found: ID does not exist" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.350294 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"] Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351562 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="registry-server" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="registry-server" Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351708 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-utilities" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351769 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-utilities" Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-utilities" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351875 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-utilities" Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351934 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-content" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351990 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-content" Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.352046 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="registry-server" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.352104 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="registry-server" Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.352166 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-content" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.352256 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-content" Dec 04 06:25:02 crc 
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.350294 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"]
Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351562 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="registry-server"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="registry-server"
Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351708 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-utilities"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351769 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-utilities"
Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-utilities"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351875 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-utilities"
Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.351934 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-content"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.351990 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="extract-content"
Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.352046 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="registry-server"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.352104 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="registry-server"
Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.352166 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-content"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.352256 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="extract-content"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.352449 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a87100-1f0e-4ad2-b5a0-cf1b3241e678" containerName="registry-server"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.352527 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" containerName="registry-server"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.353296 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.355686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t7p2s"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.362679 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.363802 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.368153 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jftrs"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.373221 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.379102 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.383516 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.390804 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hk2s9"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.413445 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.463531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcw8p\" (UniqueName: \"kubernetes.io/projected/ef8f8bec-efa4-4239-839d-791aed710641-kube-api-access-bcw8p\") pod \"cinder-operator-controller-manager-859b6ccc6-wwmfh\" (UID: \"ef8f8bec-efa4-4239-839d-791aed710641\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.463906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8px5\" (UniqueName: \"kubernetes.io/projected/49edbb71-76d8-4f14-986d-9fd821c55ff4-kube-api-access-j8px5\") pod \"designate-operator-controller-manager-78b4bc895b-9cmtc\" (UID: \"49edbb71-76d8-4f14-986d-9fd821c55ff4\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.464091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvdj\" (UniqueName: \"kubernetes.io/projected/a85cdbe2-2e25-43b2-bcad-55aaf1e6755d-kube-api-access-qwvdj\") pod \"barbican-operator-controller-manager-7d9dfd778-vjxxr\" (UID: \"a85cdbe2-2e25-43b2-bcad-55aaf1e6755d\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.465433 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.466665 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.474235 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tlh7n"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.487019 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.498578 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.518609 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.519834 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.525002 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.529249 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.530548 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.534251 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p9hkd"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.535922 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k7dbw"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.549837 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-wr29d"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.551062 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.553630 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-whgz5"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.557534 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.557750 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.565147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcw8p\" (UniqueName: \"kubernetes.io/projected/ef8f8bec-efa4-4239-839d-791aed710641-kube-api-access-bcw8p\") pod \"cinder-operator-controller-manager-859b6ccc6-wwmfh\" (UID: \"ef8f8bec-efa4-4239-839d-791aed710641\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.565200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/860c33f9-d57a-45b6-bc73-670d92e753a4-kube-api-access-vl8zr\") pod \"heat-operator-controller-manager-5f64f6f8bb-7x9qz\" (UID: \"860c33f9-d57a-45b6-bc73-670d92e753a4\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.565265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8px5\" (UniqueName: \"kubernetes.io/projected/49edbb71-76d8-4f14-986d-9fd821c55ff4-kube-api-access-j8px5\") pod \"designate-operator-controller-manager-78b4bc895b-9cmtc\" (UID: \"49edbb71-76d8-4f14-986d-9fd821c55ff4\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.565293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bl94\" (UniqueName: \"kubernetes.io/projected/f17d47bc-9039-4195-bdbd-e9f58d4c305b-kube-api-access-5bl94\") pod \"glance-operator-controller-manager-77987cd8cd-s5wdp\" (UID: \"f17d47bc-9039-4195-bdbd-e9f58d4c305b\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.565346 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvdj\" (UniqueName: \"kubernetes.io/projected/a85cdbe2-2e25-43b2-bcad-55aaf1e6755d-kube-api-access-qwvdj\") pod \"barbican-operator-controller-manager-7d9dfd778-vjxxr\" (UID: \"a85cdbe2-2e25-43b2-bcad-55aaf1e6755d\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.565433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljb74\" (UniqueName: \"kubernetes.io/projected/e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f-kube-api-access-ljb74\") pod \"horizon-operator-controller-manager-68c6d99b8f-8qb2w\" (UID: \"e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.573664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-wr29d"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.589881 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.591432 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.594822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb"]
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.595977 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gt75v"
Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.599440 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs"]
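Note on the burst of volume records in this window: each volume moves through three phases that log from different call sites, reconciler_common.go:245 ("VerifyControllerAttachedVolume started"), reconciler_common.go:218 ("MountVolume started"), then operation_generator.go:637 ("MountVolume.SetUp succeeded"). A sketch that tallies the furthest phase each UniqueName reaches, which makes mounts that never complete stand out (Python 3, standard library; placeholder path; see the "cert" volumes further down for mounts stuck before SetUp):

    import re, sys

    PHASES = ("operationExecutor.VerifyControllerAttachedVolume started",
              "operationExecutor.MountVolume started",
              "MountVolume.SetUp succeeded")
    uniq = re.compile(r'UniqueName: \\"([^"\\]+)\\"')
    state = {}  # UniqueName -> index of furthest phase observed
    with open(sys.argv[1], encoding="utf-8") as log:
        for line in log:
            m = uniq.search(line)
            if not m:
                continue
            for i, phase in enumerate(PHASES):
                if phase in line:
                    state[m.group(1)] = max(state.get(m.group(1), -1), i)
    for vol, i in sorted(state.items()):
        if i < 2:
            print(f"stuck after '{PHASES[i]}': {vol}")
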
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.611698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcw8p\" (UniqueName: \"kubernetes.io/projected/ef8f8bec-efa4-4239-839d-791aed710641-kube-api-access-bcw8p\") pod \"cinder-operator-controller-manager-859b6ccc6-wwmfh\" (UID: \"ef8f8bec-efa4-4239-839d-791aed710641\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.625236 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.625992 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qt8v7" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.627006 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.630095 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8px5\" (UniqueName: \"kubernetes.io/projected/49edbb71-76d8-4f14-986d-9fd821c55ff4-kube-api-access-j8px5\") pod \"designate-operator-controller-manager-78b4bc895b-9cmtc\" (UID: \"49edbb71-76d8-4f14-986d-9fd821c55ff4\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.632687 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-l7g54" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.636085 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.641433 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.641530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvdj\" (UniqueName: \"kubernetes.io/projected/a85cdbe2-2e25-43b2-bcad-55aaf1e6755d-kube-api-access-qwvdj\") pod \"barbican-operator-controller-manager-7d9dfd778-vjxxr\" (UID: \"a85cdbe2-2e25-43b2-bcad-55aaf1e6755d\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.649751 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.653957 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.656582 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wlmjp" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbsnq\" (UniqueName: \"kubernetes.io/projected/84bf2c21-9b47-46f8-970e-e2e34c5d0112-kube-api-access-tbsnq\") pod \"keystone-operator-controller-manager-7765d96ddf-xd7gs\" (UID: \"84bf2c21-9b47-46f8-970e-e2e34c5d0112\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666358 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmc62\" (UniqueName: \"kubernetes.io/projected/2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9-kube-api-access-bmc62\") pod \"ironic-operator-controller-manager-6c548fd776-6shfb\" (UID: \"2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bl94\" (UniqueName: \"kubernetes.io/projected/f17d47bc-9039-4195-bdbd-e9f58d4c305b-kube-api-access-5bl94\") pod \"glance-operator-controller-manager-77987cd8cd-s5wdp\" (UID: \"f17d47bc-9039-4195-bdbd-e9f58d4c305b\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjxpj\" (UniqueName: \"kubernetes.io/projected/69747c52-1139-4d71-be0d-d6b8d534f0bf-kube-api-access-bjxpj\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666492 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljb74\" (UniqueName: \"kubernetes.io/projected/e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f-kube-api-access-ljb74\") pod \"horizon-operator-controller-manager-68c6d99b8f-8qb2w\" (UID: \"e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrx8\" (UniqueName: 
\"kubernetes.io/projected/7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df-kube-api-access-mtrx8\") pod \"manila-operator-controller-manager-7c79b5df47-zd4wx\" (UID: \"7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.666575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/860c33f9-d57a-45b6-bc73-670d92e753a4-kube-api-access-vl8zr\") pod \"heat-operator-controller-manager-5f64f6f8bb-7x9qz\" (UID: \"860c33f9-d57a-45b6-bc73-670d92e753a4\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.674568 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.688603 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.697984 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.701416 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.703043 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.725761 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.726617 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-m4zt9" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.734822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bl94\" (UniqueName: \"kubernetes.io/projected/f17d47bc-9039-4195-bdbd-e9f58d4c305b-kube-api-access-5bl94\") pod \"glance-operator-controller-manager-77987cd8cd-s5wdp\" (UID: \"f17d47bc-9039-4195-bdbd-e9f58d4c305b\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.736196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljb74\" (UniqueName: \"kubernetes.io/projected/e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f-kube-api-access-ljb74\") pod \"horizon-operator-controller-manager-68c6d99b8f-8qb2w\" (UID: \"e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.737945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/860c33f9-d57a-45b6-bc73-670d92e753a4-kube-api-access-vl8zr\") pod \"heat-operator-controller-manager-5f64f6f8bb-7x9qz\" (UID: \"860c33f9-d57a-45b6-bc73-670d92e753a4\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.748561 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5ac33c-54f7-4306-bc07-6ff6fd971437" path="/var/lib/kubelet/pods/db5ac33c-54f7-4306-bc07-6ff6fd971437/volumes" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.749443 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.767444 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.767488 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.767667 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.768577 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.768879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg4qr\" (UniqueName: \"kubernetes.io/projected/c0cedc81-309b-4d1f-8349-632ca9d38e96-kube-api-access-jg4qr\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hwpjd\" (UID: \"c0cedc81-309b-4d1f-8349-632ca9d38e96\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.768915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjxpj\" (UniqueName: \"kubernetes.io/projected/69747c52-1139-4d71-be0d-d6b8d534f0bf-kube-api-access-bjxpj\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.768957 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.768999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrx8\" (UniqueName: \"kubernetes.io/projected/7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df-kube-api-access-mtrx8\") pod \"manila-operator-controller-manager-7c79b5df47-zd4wx\" (UID: \"7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.769061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwbb\" (UniqueName: \"kubernetes.io/projected/35d20429-0e0e-4090-8d0b-9a590e8fd9ab-kube-api-access-wrwbb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-dr2cc\" (UID: \"35d20429-0e0e-4090-8d0b-9a590e8fd9ab\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.769089 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbsnq\" (UniqueName: \"kubernetes.io/projected/84bf2c21-9b47-46f8-970e-e2e34c5d0112-kube-api-access-tbsnq\") pod \"keystone-operator-controller-manager-7765d96ddf-xd7gs\" (UID: \"84bf2c21-9b47-46f8-970e-e2e34c5d0112\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.769115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmc62\" (UniqueName: \"kubernetes.io/projected/2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9-kube-api-access-bmc62\") pod \"ironic-operator-controller-manager-6c548fd776-6shfb\" (UID: \"2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" Dec 04 06:25:02 crc kubenswrapper[4832]: E1204 06:25:02.770369 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:02 crc 
kubenswrapper[4832]: E1204 06:25:02.770460 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert podName:69747c52-1139-4d71-be0d-d6b8d534f0bf nodeName:}" failed. No retries permitted until 2025-12-04 06:25:03.270438873 +0000 UTC m=+958.883256639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert") pod "infra-operator-controller-manager-57548d458d-wr29d" (UID: "69747c52-1139-4d71-be0d-d6b8d534f0bf") : secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.785783 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.785843 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.786115 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.787199 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-c7t2j" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.787330 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fzv7d" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.801474 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.802969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.812810 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.828774 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vm7np" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.832173 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbsnq\" (UniqueName: \"kubernetes.io/projected/84bf2c21-9b47-46f8-970e-e2e34c5d0112-kube-api-access-tbsnq\") pod \"keystone-operator-controller-manager-7765d96ddf-xd7gs\" (UID: \"84bf2c21-9b47-46f8-970e-e2e34c5d0112\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.837356 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.837996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmc62\" (UniqueName: \"kubernetes.io/projected/2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9-kube-api-access-bmc62\") pod \"ironic-operator-controller-manager-6c548fd776-6shfb\" (UID: \"2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.847016 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.848686 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.852792 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8knjb" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.894433 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrx8\" (UniqueName: \"kubernetes.io/projected/7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df-kube-api-access-mtrx8\") pod \"manila-operator-controller-manager-7c79b5df47-zd4wx\" (UID: \"7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.895447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjxpj\" (UniqueName: \"kubernetes.io/projected/69747c52-1139-4d71-be0d-d6b8d534f0bf-kube-api-access-bjxpj\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.913083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkj5\" (UniqueName: \"kubernetes.io/projected/81848f9c-5ee4-4fbc-a744-701009bcbe53-kube-api-access-dxkj5\") pod \"nova-operator-controller-manager-697bc559fc-djqmz\" (UID: \"81848f9c-5ee4-4fbc-a744-701009bcbe53\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.913159 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5f4\" (UniqueName: \"kubernetes.io/projected/e8500aa8-6a4f-4d7b-8939-eab62a946850-kube-api-access-vt5f4\") pod \"octavia-operator-controller-manager-998648c74-zgnkq\" (UID: \"e8500aa8-6a4f-4d7b-8939-eab62a946850\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.913226 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrwbb\" (UniqueName: \"kubernetes.io/projected/35d20429-0e0e-4090-8d0b-9a590e8fd9ab-kube-api-access-wrwbb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-dr2cc\" (UID: \"35d20429-0e0e-4090-8d0b-9a590e8fd9ab\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.913319 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg4qr\" (UniqueName: \"kubernetes.io/projected/c0cedc81-309b-4d1f-8349-632ca9d38e96-kube-api-access-jg4qr\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hwpjd\" (UID: \"c0cedc81-309b-4d1f-8349-632ca9d38e96\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.913375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwqn\" (UniqueName: \"kubernetes.io/projected/63f185bd-a5f7-40a2-b51f-f60bf2c161a9-kube-api-access-2mwqn\") pod \"ovn-operator-controller-manager-b6456fdb6-lbvnf\" (UID: \"63f185bd-a5f7-40a2-b51f-f60bf2c161a9\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.913428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v47ht\" (UniqueName: \"kubernetes.io/projected/4226c957-fd5d-4b1d-84ca-a94e76ff138c-kube-api-access-v47ht\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.914197 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.914279 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.940177 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.951206 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr"] Dec 04 06:25:02 crc kubenswrapper[4832]: I1204 06:25:02.969949 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg4qr\" (UniqueName: \"kubernetes.io/projected/c0cedc81-309b-4d1f-8349-632ca9d38e96-kube-api-access-jg4qr\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hwpjd\" (UID: \"c0cedc81-309b-4d1f-8349-632ca9d38e96\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.025240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mwqn\" (UniqueName: \"kubernetes.io/projected/63f185bd-a5f7-40a2-b51f-f60bf2c161a9-kube-api-access-2mwqn\") pod \"ovn-operator-controller-manager-b6456fdb6-lbvnf\" (UID: \"63f185bd-a5f7-40a2-b51f-f60bf2c161a9\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.025293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v47ht\" (UniqueName: \"kubernetes.io/projected/4226c957-fd5d-4b1d-84ca-a94e76ff138c-kube-api-access-v47ht\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.025318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.025380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkj5\" (UniqueName: \"kubernetes.io/projected/81848f9c-5ee4-4fbc-a744-701009bcbe53-kube-api-access-dxkj5\") pod \"nova-operator-controller-manager-697bc559fc-djqmz\" (UID: \"81848f9c-5ee4-4fbc-a744-701009bcbe53\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.025426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt5f4\" (UniqueName: \"kubernetes.io/projected/e8500aa8-6a4f-4d7b-8939-eab62a946850-kube-api-access-vt5f4\") pod \"octavia-operator-controller-manager-998648c74-zgnkq\" (UID: \"e8500aa8-6a4f-4d7b-8939-eab62a946850\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:25:03 crc kubenswrapper[4832]: E1204 06:25:03.025947 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:03 crc kubenswrapper[4832]: E1204 06:25:03.026037 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert podName:4226c957-fd5d-4b1d-84ca-a94e76ff138c nodeName:}" failed. 
No retries permitted until 2025-12-04 06:25:03.52600716 +0000 UTC m=+959.138824866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" (UID: "4226c957-fd5d-4b1d-84ca-a94e76ff138c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.030267 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.041476 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.044581 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.059115 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.063066 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.083382 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrwbb\" (UniqueName: \"kubernetes.io/projected/35d20429-0e0e-4090-8d0b-9a590e8fd9ab-kube-api-access-wrwbb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-dr2cc\" (UID: \"35d20429-0e0e-4090-8d0b-9a590e8fd9ab\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.087554 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-lr247"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.091201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mwqn\" (UniqueName: \"kubernetes.io/projected/63f185bd-a5f7-40a2-b51f-f60bf2c161a9-kube-api-access-2mwqn\") pod \"ovn-operator-controller-manager-b6456fdb6-lbvnf\" (UID: \"63f185bd-a5f7-40a2-b51f-f60bf2c161a9\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.091475 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4f4hq" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.092128 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.092189 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf"]
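
The two E entries above (secret.go:188 and nestedpendingoperations.go:348) set up a pattern that repeats for the rest of this capture: the pod's "cert" volume references a secret that does not exist yet, so MountVolume.SetUp fails and kubelet schedules a retry instead of failing the pod. Following durationBeforeRetry for this same volume through the later entries gives 500ms, 1s, 2s, 4s, i.e. a doubling backoff. A minimal sketch of that schedule, assuming kubelet's usual exponential backoff (the 2m2s cap is its exponentialbackoff constant as far as I recall; the 500ms start and the doubling are read straight off the timestamps here):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Sketch of the retry schedule behind the "No retries permitted until ...
    // (durationBeforeRetry ...)" messages: each consecutive failure doubles
    // the wait, up to a cap. Illustrative only; kubelet's real logic lives in
    // nestedpendingoperations.go and its exponentialbackoff package.
    const (
    	initialBackoff = 500 * time.Millisecond
    	maxBackoff     = 2*time.Minute + 2*time.Second // assumed cap
    )

    func nextBackoff(d time.Duration) time.Duration {
    	if d == 0 {
    		return initialBackoff
    	}
    	d *= 2
    	if d > maxBackoff {
    		d = maxBackoff
    	}
    	return d
    }

    func main() {
    	var d time.Duration
    	for i := 0; i < 6; i++ {
    		d = nextBackoff(d)
    		fmt.Println(d) // 500ms 1s 2s 4s 8s 16s
    	}
    }
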
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.093288 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6cvtd" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.097170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxkj5\" (UniqueName: \"kubernetes.io/projected/81848f9c-5ee4-4fbc-a744-701009bcbe53-kube-api-access-dxkj5\") pod \"nova-operator-controller-manager-697bc559fc-djqmz\" (UID: \"81848f9c-5ee4-4fbc-a744-701009bcbe53\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.097706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v47ht\" (UniqueName: \"kubernetes.io/projected/4226c957-fd5d-4b1d-84ca-a94e76ff138c-kube-api-access-v47ht\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.101582 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.112554 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.220763 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-zc52r"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.220830 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.223718 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.226925 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-k458f" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.232116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjw26\" (UniqueName: \"kubernetes.io/projected/7184b79e-0476-4d6d-99f3-329ad46dff61-kube-api-access-wjw26\") pod \"placement-operator-controller-manager-78f8948974-lr247\" (UID: \"7184b79e-0476-4d6d-99f3-329ad46dff61\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.232205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whxcl\" (UniqueName: \"kubernetes.io/projected/cac84290-1321-4a86-a4c0-06019e9d5dfd-kube-api-access-whxcl\") pod \"swift-operator-controller-manager-5f8c65bbfc-wjbl9\" (UID: \"cac84290-1321-4a86-a4c0-06019e9d5dfd\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.232449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7m9\" (UniqueName: \"kubernetes.io/projected/ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea-kube-api-access-sx7m9\") pod \"telemetry-operator-controller-manager-76cc84c6bb-fl4dk\" (UID: \"ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.333744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7m9\" (UniqueName: \"kubernetes.io/projected/ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea-kube-api-access-sx7m9\") pod \"telemetry-operator-controller-manager-76cc84c6bb-fl4dk\" (UID: \"ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.333828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjw26\" (UniqueName: \"kubernetes.io/projected/7184b79e-0476-4d6d-99f3-329ad46dff61-kube-api-access-wjw26\") pod \"placement-operator-controller-manager-78f8948974-lr247\" (UID: \"7184b79e-0476-4d6d-99f3-329ad46dff61\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.333891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whxcl\" (UniqueName: \"kubernetes.io/projected/cac84290-1321-4a86-a4c0-06019e9d5dfd-kube-api-access-whxcl\") pod \"swift-operator-controller-manager-5f8c65bbfc-wjbl9\" (UID: \"cac84290-1321-4a86-a4c0-06019e9d5dfd\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.333941 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:03 crc kubenswrapper[4832]: E1204 06:25:03.334123 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:03 crc kubenswrapper[4832]: E1204 06:25:03.334182 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert podName:69747c52-1139-4d71-be0d-d6b8d534f0bf nodeName:}" failed. No retries permitted until 2025-12-04 06:25:04.334162567 +0000 UTC m=+959.946980273 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert") pod "infra-operator-controller-manager-57548d458d-wr29d" (UID: "69747c52-1139-4d71-be0d-d6b8d534f0bf") : secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.356476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt5f4\" (UniqueName: \"kubernetes.io/projected/e8500aa8-6a4f-4d7b-8939-eab62a946850-kube-api-access-vt5f4\") pod \"octavia-operator-controller-manager-998648c74-zgnkq\" (UID: \"e8500aa8-6a4f-4d7b-8939-eab62a946850\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.359862 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.373557 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.384124 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.455695 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5gd\" (UniqueName: \"kubernetes.io/projected/f897a405-3157-4e56-b2b8-1076557cab9e-kube-api-access-qs5gd\") pod \"test-operator-controller-manager-5854674fcc-zc52r\" (UID: \"f897a405-3157-4e56-b2b8-1076557cab9e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.462775 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lv6m8" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.468748 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-lr247"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.588833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.670382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5gd\" (UniqueName: \"kubernetes.io/projected/f897a405-3157-4e56-b2b8-1076557cab9e-kube-api-access-qs5gd\") pod \"test-operator-controller-manager-5854674fcc-zc52r\" (UID: \"f897a405-3157-4e56-b2b8-1076557cab9e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:25:03 crc kubenswrapper[4832]: E1204 06:25:03.590706 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:03 crc kubenswrapper[4832]: E1204 06:25:03.670950 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert podName:4226c957-fd5d-4b1d-84ca-a94e76ff138c nodeName:}" failed. No retries permitted until 2025-12-04 06:25:04.670933141 +0000 UTC m=+960.283750847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" (UID: "4226c957-fd5d-4b1d-84ca-a94e76ff138c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.768652 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.972457 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-zc52r"] Dec 04 06:25:03 crc kubenswrapper[4832]: I1204 06:25:03.990796 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjw26\" (UniqueName: \"kubernetes.io/projected/7184b79e-0476-4d6d-99f3-329ad46dff61-kube-api-access-wjw26\") pod \"placement-operator-controller-manager-78f8948974-lr247\" (UID: \"7184b79e-0476-4d6d-99f3-329ad46dff61\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.003966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7m9\" (UniqueName: \"kubernetes.io/projected/ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea-kube-api-access-sx7m9\") pod \"telemetry-operator-controller-manager-76cc84c6bb-fl4dk\" (UID: \"ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.008019 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whxcl\" (UniqueName: \"kubernetes.io/projected/cac84290-1321-4a86-a4c0-06019e9d5dfd-kube-api-access-whxcl\") pod \"swift-operator-controller-manager-5f8c65bbfc-wjbl9\" (UID: \"cac84290-1321-4a86-a4c0-06019e9d5dfd\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.014007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5gd\" (UniqueName: \"kubernetes.io/projected/f897a405-3157-4e56-b2b8-1076557cab9e-kube-api-access-qs5gd\") pod \"test-operator-controller-manager-5854674fcc-zc52r\" (UID: \"f897a405-3157-4e56-b2b8-1076557cab9e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.128522 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz"] Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.129728 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.144489 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-flzxh" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.220756 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.221315 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.221599 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.274646 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kq76\" (UniqueName: \"kubernetes.io/projected/e6d35b26-0a9e-4174-a073-d0a608dbafcd-kube-api-access-7kq76\") pod \"watcher-operator-controller-manager-769dc69bc-htcvz\" (UID: \"e6d35b26-0a9e-4174-a073-d0a608dbafcd\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.275279 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.376321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.376499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kq76\" (UniqueName: \"kubernetes.io/projected/e6d35b26-0a9e-4174-a073-d0a608dbafcd-kube-api-access-7kq76\") pod \"watcher-operator-controller-manager-769dc69bc-htcvz\" (UID: \"e6d35b26-0a9e-4174-a073-d0a608dbafcd\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.377261 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.377347 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert podName:69747c52-1139-4d71-be0d-d6b8d534f0bf nodeName:}" failed. No retries permitted until 2025-12-04 06:25:06.377327582 +0000 UTC m=+961.990145288 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert") pod "infra-operator-controller-manager-57548d458d-wr29d" (UID: "69747c52-1139-4d71-be0d-d6b8d534f0bf") : secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.420380 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz"] Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.435810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kq76\" (UniqueName: \"kubernetes.io/projected/e6d35b26-0a9e-4174-a073-d0a608dbafcd-kube-api-access-7kq76\") pod \"watcher-operator-controller-manager-769dc69bc-htcvz\" (UID: \"e6d35b26-0a9e-4174-a073-d0a608dbafcd\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.534095 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.566742 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9"] Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.567769 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.570268 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.570295 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.570410 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fccjx" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.603307 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9"] Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.651825 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl"] Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.652789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.654921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4x8g4" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.663249 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl"] Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.685654 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj966\" (UniqueName: \"kubernetes.io/projected/57013f06-c328-4c9c-b4c9-284df662cc0e-kube-api-access-vj966\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.685751 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.685868 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.685935 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.686169 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.686239 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert podName:4226c957-fd5d-4b1d-84ca-a94e76ff138c nodeName:}" failed. No retries permitted until 2025-12-04 06:25:06.686215537 +0000 UTC m=+962.299033243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" (UID: "4226c957-fd5d-4b1d-84ca-a94e76ff138c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.786901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.787034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj966\" (UniqueName: \"kubernetes.io/projected/57013f06-c328-4c9c-b4c9-284df662cc0e-kube-api-access-vj966\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.787141 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdl4\" (UniqueName: \"kubernetes.io/projected/626ec042-7ccd-4a54-8625-de8861efca16-kube-api-access-xjdl4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qx7fl\" (UID: \"626ec042-7ccd-4a54-8625-de8861efca16\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.787175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.795379 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.795884 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 
06:25:04.797902 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.798019 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:05.29798491 +0000 UTC m=+960.910802706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "webhook-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.798487 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: E1204 06:25:04.798680 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:05.298648256 +0000 UTC m=+960.911466052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "metrics-server-cert" not found Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.815191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj966\" (UniqueName: \"kubernetes.io/projected/57013f06-c328-4c9c-b4c9-284df662cc0e-kube-api-access-vj966\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.889008 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdl4\" (UniqueName: \"kubernetes.io/projected/626ec042-7ccd-4a54-8625-de8861efca16-kube-api-access-xjdl4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qx7fl\" (UID: \"626ec042-7ccd-4a54-8625-de8861efca16\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.908836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdl4\" (UniqueName: \"kubernetes.io/projected/626ec042-7ccd-4a54-8625-de8861efca16-kube-api-access-xjdl4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qx7fl\" (UID: \"626ec042-7ccd-4a54-8625-de8861efca16\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" Dec 04 06:25:04 crc kubenswrapper[4832]: I1204 06:25:04.979872 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" Dec 04 06:25:05 crc kubenswrapper[4832]: I1204 06:25:05.371295 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:05 crc kubenswrapper[4832]: I1204 06:25:05.371678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:05 crc kubenswrapper[4832]: E1204 06:25:05.371532 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 06:25:05 crc kubenswrapper[4832]: E1204 06:25:05.371799 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 06:25:05 crc kubenswrapper[4832]: E1204 06:25:05.371816 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:06.371794007 +0000 UTC m=+961.984611703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "metrics-server-cert" not found Dec 04 06:25:05 crc kubenswrapper[4832]: E1204 06:25:05.371848 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:06.371831198 +0000 UTC m=+961.984648904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.441519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.441767 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.443450 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert podName:69747c52-1139-4d71-be0d-d6b8d534f0bf nodeName:}" failed. No retries permitted until 2025-12-04 06:25:10.443422056 +0000 UTC m=+966.056239762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert") pod "infra-operator-controller-manager-57548d458d-wr29d" (UID: "69747c52-1139-4d71-be0d-d6b8d534f0bf") : secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.444547 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.444654 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:08.444642697 +0000 UTC m=+964.057460403 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "metrics-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.443944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.444859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.444933 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.445289 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:08.444992446 +0000 UTC m=+964.057810152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.748797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.748993 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: E1204 06:25:06.749040 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert podName:4226c957-fd5d-4b1d-84ca-a94e76ff138c nodeName:}" failed. No retries permitted until 2025-12-04 06:25:10.749023769 +0000 UTC m=+966.361841475 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" (UID: "4226c957-fd5d-4b1d-84ca-a94e76ff138c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.815754 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh"] Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.821254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd"] Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.844689 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr"] Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.857284 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz"] Dec 04 06:25:06 crc kubenswrapper[4832]: W1204 06:25:06.857890 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81848f9c_5ee4_4fbc_a744_701009bcbe53.slice/crio-27a90d28a8874ac29f092e423c5fc8d039d98fea96c222a83ba29c5f81fb2336 WatchSource:0}: Error finding container 27a90d28a8874ac29f092e423c5fc8d039d98fea96c222a83ba29c5f81fb2336: Status 404 returned error can't find the container with id 27a90d28a8874ac29f092e423c5fc8d039d98fea96c222a83ba29c5f81fb2336 Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.865029 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp"] Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.869194 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc"] Dec 04 06:25:06 crc kubenswrapper[4832]: W1204 06:25:06.873462 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49edbb71_76d8_4f14_986d_9fd821c55ff4.slice/crio-297bde42acb85dee2a883204646e030e792222e00e78926665de7d61d6d83faa WatchSource:0}: Error finding container 297bde42acb85dee2a883204646e030e792222e00e78926665de7d61d6d83faa: Status 404 returned error can't find the container with id 297bde42acb85dee2a883204646e030e792222e00e78926665de7d61d6d83faa Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.876358 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb"] Dec 04 06:25:06 crc kubenswrapper[4832]: I1204 06:25:06.883222 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs"] Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.065802 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx"] Dec 04 06:25:07 crc kubenswrapper[4832]: W1204 06:25:07.071146 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d7242e2_f1a1_4bbc_b9e8_fdb337cc74df.slice/crio-c3ea04ed9c8b6804557e8fae50d22b0504878c80c2ac32ac5d3fe9024c96e8c8 WatchSource:0}: Error finding container 
Dec 04 06:25:07 crc kubenswrapper[4832]: W1204 06:25:07.071146 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d7242e2_f1a1_4bbc_b9e8_fdb337cc74df.slice/crio-c3ea04ed9c8b6804557e8fae50d22b0504878c80c2ac32ac5d3fe9024c96e8c8 WatchSource:0}: Error finding container c3ea04ed9c8b6804557e8fae50d22b0504878c80c2ac32ac5d3fe9024c96e8c8: Status 404 returned error can't find the container with id c3ea04ed9c8b6804557e8fae50d22b0504878c80c2ac32ac5d3fe9024c96e8c8 Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.103604 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-lr247"] Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.107596 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w"] Dec 04 06:25:07 crc kubenswrapper[4832]: W1204 06:25:07.108451 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a00f81_6eba_4338_adb6_f7ccfd9ccc4f.slice/crio-b05343cc7fb41552c1c4bbc38d164b7d42c969900ec79a3bf1aa8e15b9478904 WatchSource:0}: Error finding container b05343cc7fb41552c1c4bbc38d164b7d42c969900ec79a3bf1aa8e15b9478904: Status 404 returned error can't find the container with id b05343cc7fb41552c1c4bbc38d164b7d42c969900ec79a3bf1aa8e15b9478904 Dec 04 06:25:07 crc kubenswrapper[4832]: W1204 06:25:07.113308 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d20429_0e0e_4090_8d0b_9a590e8fd9ab.slice/crio-6d7741e05e7165e0ef2ca668d5021dcd9b9f9eefdeae1f60ade8003a575f48b3 WatchSource:0}: Error finding container 6d7741e05e7165e0ef2ca668d5021dcd9b9f9eefdeae1f60ade8003a575f48b3: Status 404 returned error can't find the container with id 6d7741e05e7165e0ef2ca668d5021dcd9b9f9eefdeae1f60ade8003a575f48b3 Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.146135 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vl8zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-7x9qz_openstack-operators(860c33f9-d57a-45b6-bc73-670d92e753a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.147754 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7kq76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-htcvz_openstack-operators(e6d35b26-0a9e-4174-a073-d0a608dbafcd): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
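
Every one of these "Unhandled Error" container dumps fails the same way: ErrImagePull: pull QPS exceeded. That error is generated by the kubelet itself, not by quay.io: image pulls pass through a client-side token bucket (registryPullQPS and registryBurst in the kubelet configuration, which default to 5 and 10 if I remember the defaults correctly), and starting twenty-odd operator deployments at once drains it, so the overflow pulls fail immediately and are requeued with image-pull backoff. A sketch of the shape of that limit, using the token-bucket semantics of golang.org/x/time/rate rather than kubelet's actual code:

    package main

    import (
    	"errors"
    	"fmt"

    	"golang.org/x/time/rate"
    )

    // Token bucket with the (assumed) kubelet defaults: refill 5 tokens/s,
    // burst capacity 10. Pulls beyond the burst fail fast, exactly like the
    // "pull QPS exceeded" errors above.
    var pullLimiter = rate.NewLimiter(rate.Limit(5), 10)

    func pullImage(image string) error {
    	if !pullLimiter.Allow() {
    		return errors.New("pull QPS exceeded") // same text as ErrImagePull here
    	}
    	// ...the real pull would happen here...
    	return nil
    }

    func main() {
    	for i := 0; i < 15; i++ {
    		err := pullImage(fmt.Sprintf("quay.io/example/operator-%d:latest", i))
    		fmt.Println(i, err) // roughly: first 10 nil (burst), the rest fail
    	}
    }

Nothing needs fixing for this on a single-node CRC install: the retries spread the pulls out and the pods start once the bucket refills.
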
Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.148079 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vl8zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-7x9qz_openstack-operators(860c33f9-d57a-45b6-bc73-670d92e753a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.148745 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qs5gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-zc52r_openstack-operators(f897a405-3157-4e56-b2b8-1076557cab9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.149233 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc"] Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.149247 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" podUID="860c33f9-d57a-45b6-bc73-670d92e753a4" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.151287 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjw26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-lr247_openstack-operators(7184b79e-0476-4d6d-99f3-329ad46dff61): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.152216 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7kq76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-htcvz_openstack-operators(e6d35b26-0a9e-4174-a073-d0a608dbafcd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.152261 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qs5gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-zc52r_openstack-operators(f897a405-3157-4e56-b2b8-1076557cab9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.152860 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lbvnf_openstack-operators(63f185bd-a5f7-40a2-b51f-f60bf2c161a9): ErrImagePull: pull QPS 
exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.153597 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" podUID="e6d35b26-0a9e-4174-a073-d0a608dbafcd" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.153595 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" podUID="f897a405-3157-4e56-b2b8-1076557cab9e" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.153700 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjw26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-lr247_openstack-operators(7184b79e-0476-4d6d-99f3-329ad46dff61): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.154021 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjdl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qx7fl_openstack-operators(626ec042-7ccd-4a54-8625-de8861efca16): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.155331 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" podUID="626ec042-7ccd-4a54-8625-de8861efca16" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.155379 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" podUID="7184b79e-0476-4d6d-99f3-329ad46dff61" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.160176 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mwqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lbvnf_openstack-operators(63f185bd-a5f7-40a2-b51f-f60bf2c161a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.160372 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz"] Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.162976 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" podUID="63f185bd-a5f7-40a2-b51f-f60bf2c161a9" Dec 04 06:25:07 crc kubenswrapper[4832]: W1204 06:25:07.165600 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8500aa8_6a4f_4d7b_8939_eab62a946850.slice/crio-e0f6ec3ed712c2ba66c0a58f906c6e52bf620abf2eff215cc4b90b31ba64761a WatchSource:0}: Error finding container e0f6ec3ed712c2ba66c0a58f906c6e52bf620abf2eff215cc4b90b31ba64761a: Status 404 returned error can't find the container with id e0f6ec3ed712c2ba66c0a58f906c6e52bf620abf2eff215cc4b90b31ba64761a Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.169016 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf"] Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.173325 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whxcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-wjbl9_openstack-operators(cac84290-1321-4a86-a4c0-06019e9d5dfd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.173557 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt5f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-zgnkq_openstack-operators(e8500aa8-6a4f-4d7b-8939-eab62a946850): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.173586 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz"] Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.175546 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whxcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-wjbl9_openstack-operators(cac84290-1321-4a86-a4c0-06019e9d5dfd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.176137 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx7m9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-fl4dk_openstack-operators(ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.176230 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vt5f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-zgnkq_openstack-operators(e8500aa8-6a4f-4d7b-8939-eab62a946850): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.177124 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-zc52r"] Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.177170 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" podUID="cac84290-1321-4a86-a4c0-06019e9d5dfd" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.177997 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx7m9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-fl4dk_openstack-operators(ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.178072 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" podUID="e8500aa8-6a4f-4d7b-8939-eab62a946850" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.179156 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" podUID="ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.181988 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl"] Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.186985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9"] Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.195141 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq"] Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.201199 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk"] Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.652007 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" event={"ID":"f17d47bc-9039-4195-bdbd-e9f58d4c305b","Type":"ContainerStarted","Data":"344e1d9fa7196e556451d1ec9312bd4f3629e63ae742c3f236335e1171231a4d"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.654871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" event={"ID":"860c33f9-d57a-45b6-bc73-670d92e753a4","Type":"ContainerStarted","Data":"cd0f41235c85764350bf4d66bf309f943f6ed8f5ad40f9ec63f4efee1efc8320"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.657850 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" event={"ID":"c0cedc81-309b-4d1f-8349-632ca9d38e96","Type":"ContainerStarted","Data":"08ac19514286cfcfdecc1295a921899d8b59573b16831e8d2501b621e5a325d2"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.658426 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" podUID="860c33f9-d57a-45b6-bc73-670d92e753a4" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.663850 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" event={"ID":"ef8f8bec-efa4-4239-839d-791aed710641","Type":"ContainerStarted","Data":"4976428236e0dc31725c4fc9b95c208032fd0f9a1b440e6bc0809a05aedc4d71"} Dec 04 06:25:07 crc 
kubenswrapper[4832]: I1204 06:25:07.667371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" event={"ID":"e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f","Type":"ContainerStarted","Data":"b05343cc7fb41552c1c4bbc38d164b7d42c969900ec79a3bf1aa8e15b9478904"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.678322 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" event={"ID":"7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df","Type":"ContainerStarted","Data":"c3ea04ed9c8b6804557e8fae50d22b0504878c80c2ac32ac5d3fe9024c96e8c8"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.683530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" event={"ID":"63f185bd-a5f7-40a2-b51f-f60bf2c161a9","Type":"ContainerStarted","Data":"bfafdd074abab025b55ca512d1b0d75ae11afbe4c8078b0b38ef8f9ad5eee44f"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.685610 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" podUID="63f185bd-a5f7-40a2-b51f-f60bf2c161a9" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.686266 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" event={"ID":"f897a405-3157-4e56-b2b8-1076557cab9e","Type":"ContainerStarted","Data":"e9efd56145fdd3b4fc6e7163ccc4b029f2bf55c7204bf7419f895d5fb967ed3d"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.690576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" event={"ID":"e6d35b26-0a9e-4174-a073-d0a608dbafcd","Type":"ContainerStarted","Data":"f9a01c2bfcd2d315791752de90771944849b833753c98aa5bdc49874e24886f9"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.691497 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" podUID="f897a405-3157-4e56-b2b8-1076557cab9e" Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.692504 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" podUID="e6d35b26-0a9e-4174-a073-d0a608dbafcd" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.692840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" event={"ID":"84bf2c21-9b47-46f8-970e-e2e34c5d0112","Type":"ContainerStarted","Data":"0e6785c11cbbc3b6a4b22a1a4d2dde3bf472352160a6c36cc87c33d93e8e1d68"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.695238 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" event={"ID":"49edbb71-76d8-4f14-986d-9fd821c55ff4","Type":"ContainerStarted","Data":"297bde42acb85dee2a883204646e030e792222e00e78926665de7d61d6d83faa"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.697683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" event={"ID":"cac84290-1321-4a86-a4c0-06019e9d5dfd","Type":"ContainerStarted","Data":"fd2f124c8c70c3ef27b66520d107dffceb13cea5cdd61127500f7f86cddeae07"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.701268 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" event={"ID":"2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9","Type":"ContainerStarted","Data":"90310a1d04db3d1dfbc0e8e0b1ec600346568936f51e3155497a822662fb4ec2"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.701631 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" podUID="cac84290-1321-4a86-a4c0-06019e9d5dfd" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.702542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" event={"ID":"35d20429-0e0e-4090-8d0b-9a590e8fd9ab","Type":"ContainerStarted","Data":"6d7741e05e7165e0ef2ca668d5021dcd9b9f9eefdeae1f60ade8003a575f48b3"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.704711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" event={"ID":"7184b79e-0476-4d6d-99f3-329ad46dff61","Type":"ContainerStarted","Data":"c8737f251e2052ae29520797eeb02c8e893190be658d7052b6900a6b90089cba"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.732462 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" podUID="7184b79e-0476-4d6d-99f3-329ad46dff61" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 
06:25:07.739547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" event={"ID":"ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea","Type":"ContainerStarted","Data":"7c1bf38bf30833c6caaf79168e07bd240effbdd33557b8d092500fd4f25d683b"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.755848 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" podUID="ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.756566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" event={"ID":"e8500aa8-6a4f-4d7b-8939-eab62a946850","Type":"ContainerStarted","Data":"e0f6ec3ed712c2ba66c0a58f906c6e52bf620abf2eff215cc4b90b31ba64761a"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.761099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" event={"ID":"81848f9c-5ee4-4fbc-a744-701009bcbe53","Type":"ContainerStarted","Data":"27a90d28a8874ac29f092e423c5fc8d039d98fea96c222a83ba29c5f81fb2336"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.763179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" podUID="e8500aa8-6a4f-4d7b-8939-eab62a946850" Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.763320 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" event={"ID":"a85cdbe2-2e25-43b2-bcad-55aaf1e6755d","Type":"ContainerStarted","Data":"cdd40ffb2757ee5ee29ff330b5b8cf70e9f645d4c9f44675a1ac86249acc19ec"} Dec 04 06:25:07 crc kubenswrapper[4832]: I1204 06:25:07.767020 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" event={"ID":"626ec042-7ccd-4a54-8625-de8861efca16","Type":"ContainerStarted","Data":"9104a87a1357d1f0b3ad07c7a4ed63187dd198a4f703a4113849b5021590538c"} Dec 04 06:25:07 crc kubenswrapper[4832]: E1204 06:25:07.769114 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" podUID="626ec042-7ccd-4a54-8625-de8861efca16" Dec 04 06:25:08 crc kubenswrapper[4832]: I1204 06:25:08.474993 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:08 crc kubenswrapper[4832]: I1204 06:25:08.475167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.475348 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.475424 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:12.475403474 +0000 UTC m=+968.088221180 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "metrics-server-cert" not found Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.475651 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.475735 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:12.475719082 +0000 UTC m=+968.088536788 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "webhook-server-cert" not found Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.788042 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" podUID="626ec042-7ccd-4a54-8625-de8861efca16" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.789871 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" podUID="63f185bd-a5f7-40a2-b51f-f60bf2c161a9" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.789894 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" podUID="ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.789923 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" podUID="860c33f9-d57a-45b6-bc73-670d92e753a4" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.792950 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" podUID="f897a405-3157-4e56-b2b8-1076557cab9e" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.793265 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" podUID="e8500aa8-6a4f-4d7b-8939-eab62a946850" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.793269 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" podUID="7184b79e-0476-4d6d-99f3-329ad46dff61" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.793305 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" podUID="cac84290-1321-4a86-a4c0-06019e9d5dfd" Dec 04 06:25:08 crc kubenswrapper[4832]: E1204 06:25:08.793856 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" podUID="e6d35b26-0a9e-4174-a073-d0a608dbafcd" Dec 04 06:25:10 crc kubenswrapper[4832]: I1204 06:25:10.511540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:10 crc kubenswrapper[4832]: E1204 06:25:10.511739 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:10 crc kubenswrapper[4832]: E1204 06:25:10.512080 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert podName:69747c52-1139-4d71-be0d-d6b8d534f0bf nodeName:}" failed. No retries permitted until 2025-12-04 06:25:18.51204377 +0000 UTC m=+974.124861476 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert") pod "infra-operator-controller-manager-57548d458d-wr29d" (UID: "69747c52-1139-4d71-be0d-d6b8d534f0bf") : secret "infra-operator-webhook-server-cert" not found Dec 04 06:25:10 crc kubenswrapper[4832]: I1204 06:25:10.815321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:10 crc kubenswrapper[4832]: E1204 06:25:10.815557 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:10 crc kubenswrapper[4832]: E1204 06:25:10.815651 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert podName:4226c957-fd5d-4b1d-84ca-a94e76ff138c nodeName:}" failed. No retries permitted until 2025-12-04 06:25:18.815627961 +0000 UTC m=+974.428445667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" (UID: "4226c957-fd5d-4b1d-84ca-a94e76ff138c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 06:25:12 crc kubenswrapper[4832]: I1204 06:25:12.537895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:12 crc kubenswrapper[4832]: I1204 06:25:12.538086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:12 crc kubenswrapper[4832]: E1204 06:25:12.538084 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 06:25:12 crc kubenswrapper[4832]: E1204 06:25:12.538168 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:20.538150101 +0000 UTC m=+976.150967807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "webhook-server-cert" not found Dec 04 06:25:12 crc kubenswrapper[4832]: E1204 06:25:12.538287 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 06:25:12 crc kubenswrapper[4832]: E1204 06:25:12.538436 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs podName:57013f06-c328-4c9c-b4c9-284df662cc0e nodeName:}" failed. No retries permitted until 2025-12-04 06:25:20.538331435 +0000 UTC m=+976.151149141 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs") pod "openstack-operator-controller-manager-5986db9d67-699q9" (UID: "57013f06-c328-4c9c-b4c9-284df662cc0e") : secret "metrics-server-cert" not found Dec 04 06:25:18 crc kubenswrapper[4832]: I1204 06:25:18.536771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:18 crc kubenswrapper[4832]: I1204 06:25:18.547156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69747c52-1139-4d71-be0d-d6b8d534f0bf-cert\") pod \"infra-operator-controller-manager-57548d458d-wr29d\" (UID: \"69747c52-1139-4d71-be0d-d6b8d534f0bf\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:18 crc kubenswrapper[4832]: I1204 06:25:18.779042 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-whgz5" Dec 04 06:25:18 crc kubenswrapper[4832]: I1204 06:25:18.788139 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:25:18 crc kubenswrapper[4832]: I1204 06:25:18.841386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:18 crc kubenswrapper[4832]: I1204 06:25:18.852039 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4226c957-fd5d-4b1d-84ca-a94e76ff138c-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr\" (UID: \"4226c957-fd5d-4b1d-84ca-a94e76ff138c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:19 crc kubenswrapper[4832]: I1204 06:25:19.066973 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vm7np" Dec 04 06:25:19 crc kubenswrapper[4832]: I1204 06:25:19.075400 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:25:20 crc kubenswrapper[4832]: I1204 06:25:20.572440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:20 crc kubenswrapper[4832]: I1204 06:25:20.572555 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:20 crc kubenswrapper[4832]: I1204 06:25:20.579977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-metrics-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:20 crc kubenswrapper[4832]: I1204 06:25:20.583711 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/57013f06-c328-4c9c-b4c9-284df662cc0e-webhook-certs\") pod \"openstack-operator-controller-manager-5986db9d67-699q9\" (UID: \"57013f06-c328-4c9c-b4c9-284df662cc0e\") " pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:20 crc kubenswrapper[4832]: I1204 06:25:20.800893 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fccjx" Dec 04 06:25:20 crc kubenswrapper[4832]: I1204 06:25:20.806996 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:20 crc kubenswrapper[4832]: E1204 06:25:20.977400 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Dec 04 06:25:20 crc kubenswrapper[4832]: E1204 06:25:20.977612 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcw8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-wwmfh_openstack-operators(ef8f8bec-efa4-4239-839d-791aed710641): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:23 crc kubenswrapper[4832]: E1204 06:25:23.199366 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 04 06:25:23 crc kubenswrapper[4832]: E1204 06:25:23.199921 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5bl94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-s5wdp_openstack-operators(f17d47bc-9039-4195-bdbd-e9f58d4c305b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:27 crc kubenswrapper[4832]: E1204 06:25:27.749260 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 04 06:25:27 crc kubenswrapper[4832]: E1204 06:25:27.749748 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbsnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-xd7gs_openstack-operators(84bf2c21-9b47-46f8-970e-e2e34c5d0112): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:28 crc kubenswrapper[4832]: E1204 06:25:28.608889 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 04 06:25:28 crc kubenswrapper[4832]: E1204 06:25:28.609071 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dxkj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-djqmz_openstack-operators(81848f9c-5ee4-4fbc-a744-701009bcbe53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:33 crc kubenswrapper[4832]: I1204 06:25:33.288915 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr"] Dec 04 06:25:50 crc kubenswrapper[4832]: E1204 06:25:50.038163 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 04 06:25:50 crc kubenswrapper[4832]: E1204 06:25:50.038745 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qs5gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-zc52r_openstack-operators(f897a405-3157-4e56-b2b8-1076557cab9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:50 crc kubenswrapper[4832]: I1204 06:25:50.041692 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:25:50 crc kubenswrapper[4832]: E1204 06:25:50.651274 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 04 06:25:50 crc kubenswrapper[4832]: E1204 06:25:50.651484 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whxcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-wjbl9_openstack-operators(cac84290-1321-4a86-a4c0-06019e9d5dfd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:51 crc kubenswrapper[4832]: E1204 06:25:51.113338 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Dec 04 06:25:51 crc kubenswrapper[4832]: E1204 06:25:51.113567 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7kq76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-htcvz_openstack-operators(e6d35b26-0a9e-4174-a073-d0a608dbafcd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:52 crc kubenswrapper[4832]: W1204 06:25:52.412318 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4226c957_fd5d_4b1d_84ca_a94e76ff138c.slice/crio-41c70d9a033e28be965a1b7ea0982fb6be42e50b0b860f8c061a38051782835b WatchSource:0}: Error finding container 41c70d9a033e28be965a1b7ea0982fb6be42e50b0b860f8c061a38051782835b: Status 404 returned error can't find the container with id 41c70d9a033e28be965a1b7ea0982fb6be42e50b0b860f8c061a38051782835b Dec 04 06:25:52 crc kubenswrapper[4832]: I1204 06:25:52.456356 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" event={"ID":"4226c957-fd5d-4b1d-84ca-a94e76ff138c","Type":"ContainerStarted","Data":"41c70d9a033e28be965a1b7ea0982fb6be42e50b0b860f8c061a38051782835b"} Dec 04 06:25:52 crc kubenswrapper[4832]: I1204 06:25:52.664511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-wr29d"] Dec 04 06:25:53 crc kubenswrapper[4832]: W1204 06:25:53.015589 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69747c52_1139_4d71_be0d_d6b8d534f0bf.slice/crio-e0bc0694e8137c9920663c958ff066e9424c61c8adf039bf20e402cc3b227b1b WatchSource:0}: Error finding container e0bc0694e8137c9920663c958ff066e9424c61c8adf039bf20e402cc3b227b1b: Status 404 returned error can't find the container with id e0bc0694e8137c9920663c958ff066e9424c61c8adf039bf20e402cc3b227b1b Dec 04 06:25:53 crc kubenswrapper[4832]: E1204 06:25:53.025007 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 04 06:25:53 crc kubenswrapper[4832]: E1204 06:25:53.025170 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjdl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qx7fl_openstack-operators(626ec042-7ccd-4a54-8625-de8861efca16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:25:53 crc kubenswrapper[4832]: E1204 06:25:53.026330 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" podUID="626ec042-7ccd-4a54-8625-de8861efca16" Dec 04 06:25:53 crc kubenswrapper[4832]: I1204 06:25:53.405079 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9"] Dec 04 06:25:53 crc kubenswrapper[4832]: I1204 06:25:53.462309 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" event={"ID":"69747c52-1139-4d71-be0d-d6b8d534f0bf","Type":"ContainerStarted","Data":"e0bc0694e8137c9920663c958ff066e9424c61c8adf039bf20e402cc3b227b1b"} Dec 04 06:25:53 crc kubenswrapper[4832]: W1204 06:25:53.529288 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57013f06_c328_4c9c_b4c9_284df662cc0e.slice/crio-e2293c7430766e8ce45c21489d1c844b7b3cf5494fe54b152bbbe71c5ced2772 WatchSource:0}: Error finding container e2293c7430766e8ce45c21489d1c844b7b3cf5494fe54b152bbbe71c5ced2772: Status 404 returned error can't find the container with id 
e2293c7430766e8ce45c21489d1c844b7b3cf5494fe54b152bbbe71c5ced2772 Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.478234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" event={"ID":"49edbb71-76d8-4f14-986d-9fd821c55ff4","Type":"ContainerStarted","Data":"c035c22715a53f94908c96d91f04eb69982d3c0c4cf7775b9ffa4685e7cc8642"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.481487 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" event={"ID":"35d20429-0e0e-4090-8d0b-9a590e8fd9ab","Type":"ContainerStarted","Data":"d578263de286d772604bef46e8306ce4722e1684f347ca564a58d77afd13307c"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.491739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" event={"ID":"7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df","Type":"ContainerStarted","Data":"e3678437af02d50b0bc4b719031c23c8edc23ad10f402778b97e915b779b79ae"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.495546 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" event={"ID":"57013f06-c328-4c9c-b4c9-284df662cc0e","Type":"ContainerStarted","Data":"e2293c7430766e8ce45c21489d1c844b7b3cf5494fe54b152bbbe71c5ced2772"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.503404 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" event={"ID":"e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f","Type":"ContainerStarted","Data":"4268ce835fa89831f0d87ec0898424607e623a4f19074e3c179e5c3a7f3deb0a"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.508851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" event={"ID":"c0cedc81-309b-4d1f-8349-632ca9d38e96","Type":"ContainerStarted","Data":"133f93676bfbf44582c267cef2016e589069a91b0e0a11d24b9b52cd316d2223"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.513038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" event={"ID":"2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9","Type":"ContainerStarted","Data":"df9948997a80cefe6d9b9f8bd1601f7b74f01c7c357f886fbc498cbb72caa310"} Dec 04 06:25:54 crc kubenswrapper[4832]: I1204 06:25:54.517325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" event={"ID":"a85cdbe2-2e25-43b2-bcad-55aaf1e6755d","Type":"ContainerStarted","Data":"4a4582b2edab65f42475271e4bcfedfdb7c5d491e75d54961eacb7476586486b"} Dec 04 06:25:55 crc kubenswrapper[4832]: I1204 06:25:55.526279 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" event={"ID":"ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea","Type":"ContainerStarted","Data":"7306ad99061a4dad94b22fa88214544a1dd44495745f9c7d71576d0d02cc51d3"} Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.821731 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd: Get 
\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd\": context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.822473 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcw8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-wwmfh_openstack-operators(ef8f8bec-efa4-4239-839d-791aed710641): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd\": context canceled" logger="UnhandledError" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.823625 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd: Get \\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:46ba3f23f1d3fb1440deeb279716e4377e79e61736ec2227270349b9618a0fdd\\\": context canceled\"]" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" podUID="ef8f8bec-efa4-4239-839d-791aed710641" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.833241 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.833488 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dxkj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-djqmz_openstack-operators(81848f9c-5ee4-4fbc-a744-701009bcbe53): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.834603 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" podUID="81848f9c-5ee4-4fbc-a744-701009bcbe53" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.842222 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.842596 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbsnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-xd7gs_openstack-operators(84bf2c21-9b47-46f8-970e-e2e34c5d0112): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.843938 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" podUID="84bf2c21-9b47-46f8-970e-e2e34c5d0112" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.916275 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.916444 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5bl94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-s5wdp_openstack-operators(f17d47bc-9039-4195-bdbd-e9f58d4c305b): 
ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 04 06:25:56 crc kubenswrapper[4832]: E1204 06:25:56.917723 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" podUID="f17d47bc-9039-4195-bdbd-e9f58d4c305b" Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.545457 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" event={"ID":"e8500aa8-6a4f-4d7b-8939-eab62a946850","Type":"ContainerStarted","Data":"da1d370f3a8c86939dd619c8eb74141a77ac1b53d97c90c32a8a88fea5664893"} Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.549533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" event={"ID":"7184b79e-0476-4d6d-99f3-329ad46dff61","Type":"ContainerStarted","Data":"c5c66bfc90408150633ddcc56e82d31d9aa6a18e8a57423ed0b063b7007921b9"} Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.552434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" event={"ID":"57013f06-c328-4c9c-b4c9-284df662cc0e","Type":"ContainerStarted","Data":"9eaff4daa47ff810172ec9969766294da9273966ed495a0ac4e66d2f88a50e34"} Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.553627 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.557751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" event={"ID":"63f185bd-a5f7-40a2-b51f-f60bf2c161a9","Type":"ContainerStarted","Data":"ddb2f8eb2a96047fe148f70438aa18bfe05766ed4a706b09d8261edf0f623999"} Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.559175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" event={"ID":"860c33f9-d57a-45b6-bc73-670d92e753a4","Type":"ContainerStarted","Data":"0952f8faad7177c4faf813560a1df1ddedf2354bfa767bff3cb92fc0256cc3c4"} Dec 04 06:25:57 crc kubenswrapper[4832]: I1204 06:25:57.616706 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" podStartSLOduration=53.616689307 podStartE2EDuration="53.616689307s" podCreationTimestamp="2025-12-04 06:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:25:57.594855131 +0000 UTC m=+1013.207672837" watchObservedRunningTime="2025-12-04 06:25:57.616689307 +0000 UTC m=+1013.229507013" Dec 04 06:25:59 crc kubenswrapper[4832]: I1204 06:25:59.609612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" event={"ID":"4226c957-fd5d-4b1d-84ca-a94e76ff138c","Type":"ContainerStarted","Data":"dcbdf828218874aa367976751f4688068e50e1af0110f4dc7b9dff9e3e895b94"} 
Dec 04 06:25:59 crc kubenswrapper[4832]: I1204 06:25:59.611703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" event={"ID":"69747c52-1139-4d71-be0d-d6b8d534f0bf","Type":"ContainerStarted","Data":"1e2ef27f528c716e61c414f493c6dc6ef01773d16084b1dc36c8a7c05dcd2835"} Dec 04 06:25:59 crc kubenswrapper[4832]: I1204 06:25:59.614536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" event={"ID":"f17d47bc-9039-4195-bdbd-e9f58d4c305b","Type":"ContainerStarted","Data":"ef5cb1d9aed98817104425628fcf78cfae176df403fff962da8b6030070eeeca"} Dec 04 06:25:59 crc kubenswrapper[4832]: I1204 06:25:59.615560 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" event={"ID":"81848f9c-5ee4-4fbc-a744-701009bcbe53","Type":"ContainerStarted","Data":"69492dfc381ed06ebea43ee62f48849f4750af277f8517213b4e7680ed31d0bb"} Dec 04 06:25:59 crc kubenswrapper[4832]: I1204 06:25:59.616661 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" event={"ID":"84bf2c21-9b47-46f8-970e-e2e34c5d0112","Type":"ContainerStarted","Data":"95593e7a4835aa9a3a7abe57ecf456226426379a894d8657d85f976d51c558ee"} Dec 04 06:25:59 crc kubenswrapper[4832]: E1204 06:25:59.672703 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" podUID="e6d35b26-0a9e-4174-a073-d0a608dbafcd" Dec 04 06:25:59 crc kubenswrapper[4832]: E1204 06:25:59.938598 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" podUID="cac84290-1321-4a86-a4c0-06019e9d5dfd" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.643516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" event={"ID":"ef8f8bec-efa4-4239-839d-791aed710641","Type":"ContainerStarted","Data":"bd613032e31378cedeae64e90003d953c0c84ae4f6984b69710d4ed40125d27a"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.652080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" event={"ID":"e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f","Type":"ContainerStarted","Data":"b5355735bee5280351e2d6cdf736bff658805434eb2dbb026880a48c261fd018"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.653299 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.657233 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.744582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" 
event={"ID":"c0cedc81-309b-4d1f-8349-632ca9d38e96","Type":"ContainerStarted","Data":"46a798e979e0a6f896308afaaccc32619f2199b603d5635c51dbc0669009ba07"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.744629 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.744666 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.744676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" event={"ID":"e6d35b26-0a9e-4174-a073-d0a608dbafcd","Type":"ContainerStarted","Data":"2fd9c341d48ff95d142538b41ea77dfd881f417864a75ce5a42cec951850e7a9"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.744687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" event={"ID":"cac84290-1321-4a86-a4c0-06019e9d5dfd","Type":"ContainerStarted","Data":"f19b8ea114459c8fb40ddc4e233997392eb559f24d400fd13f4020d06260969f"} Dec 04 06:26:00 crc kubenswrapper[4832]: E1204 06:26:00.746440 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" podUID="cac84290-1321-4a86-a4c0-06019e9d5dfd" Dec 04 06:26:00 crc kubenswrapper[4832]: E1204 06:26:00.750559 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" podUID="e6d35b26-0a9e-4174-a073-d0a608dbafcd" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.758662 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" event={"ID":"a85cdbe2-2e25-43b2-bcad-55aaf1e6755d","Type":"ContainerStarted","Data":"bb2af8b8ca361bb069f239f59f9b8ed43a5fcf7532d93f548df59fb643d01415"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.759724 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.789687 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.825376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" event={"ID":"7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df","Type":"ContainerStarted","Data":"7c0fc6d053200d2cdd0c6884b8b2a074ce5d8f0b1cdbccc0957186dc9b84f46c"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.826485 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 
04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.828327 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" event={"ID":"4226c957-fd5d-4b1d-84ca-a94e76ff138c","Type":"ContainerStarted","Data":"fae105933c8241406a88fd1c24fbac76e25a6d244a511dd8ae8af1564bc6650a"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.828716 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.829830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" event={"ID":"69747c52-1139-4d71-be0d-d6b8d534f0bf","Type":"ContainerStarted","Data":"a5dd59bcb6d456dda9224cf22026233a00029639954e1d8db81ab5663cbe4c6b"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.830205 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.831286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" event={"ID":"e8500aa8-6a4f-4d7b-8939-eab62a946850","Type":"ContainerStarted","Data":"0ee0a8250db6478403dee7378afad37e4f97131598a30eca1e85ba5baaee42f2"} Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.861519 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.866517 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-8qb2w" podStartSLOduration=6.867718901 podStartE2EDuration="58.866500481s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.110860751 +0000 UTC m=+962.723678447" lastFinishedPulling="2025-12-04 06:25:59.109642321 +0000 UTC m=+1014.722460027" observedRunningTime="2025-12-04 06:26:00.864524912 +0000 UTC m=+1016.477342608" watchObservedRunningTime="2025-12-04 06:26:00.866500481 +0000 UTC m=+1016.479318187" Dec 04 06:26:00 crc kubenswrapper[4832]: I1204 06:26:00.967204 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" podStartSLOduration=54.986159497 podStartE2EDuration="58.967188046s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:53.018653169 +0000 UTC m=+1008.631470875" lastFinishedPulling="2025-12-04 06:25:56.999681718 +0000 UTC m=+1012.612499424" observedRunningTime="2025-12-04 06:26:00.959605326 +0000 UTC m=+1016.572423032" watchObservedRunningTime="2025-12-04 06:26:00.967188046 +0000 UTC m=+1016.580005742" Dec 04 06:26:01 crc kubenswrapper[4832]: E1204 06:26:01.036081 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" podUID="f897a405-3157-4e56-b2b8-1076557cab9e" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.038548 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-zd4wx" podStartSLOduration=6.976572221 podStartE2EDuration="59.038537315s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.074160451 +0000 UTC m=+962.686978157" lastFinishedPulling="2025-12-04 06:25:59.136125545 +0000 UTC m=+1014.748943251" observedRunningTime="2025-12-04 06:26:01.036593306 +0000 UTC m=+1016.649411022" watchObservedRunningTime="2025-12-04 06:26:01.038537315 +0000 UTC m=+1016.651355031" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.337688 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hwpjd" podStartSLOduration=7.177070947 podStartE2EDuration="59.337673855s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.831518157 +0000 UTC m=+962.444335873" lastFinishedPulling="2025-12-04 06:25:58.992121065 +0000 UTC m=+1014.604938781" observedRunningTime="2025-12-04 06:26:01.332832544 +0000 UTC m=+1016.945650260" watchObservedRunningTime="2025-12-04 06:26:01.337673855 +0000 UTC m=+1016.950491561" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.357378 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjxxr" podStartSLOduration=7.022263806 podStartE2EDuration="59.357363019s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.851694683 +0000 UTC m=+962.464512389" lastFinishedPulling="2025-12-04 06:25:59.186793896 +0000 UTC m=+1014.799611602" observedRunningTime="2025-12-04 06:26:01.355755139 +0000 UTC m=+1016.968572845" watchObservedRunningTime="2025-12-04 06:26:01.357363019 +0000 UTC m=+1016.970180725" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.435176 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" podStartSLOduration=54.996729623 podStartE2EDuration="59.43515093s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:52.414784058 +0000 UTC m=+1008.027601764" lastFinishedPulling="2025-12-04 06:25:56.853205365 +0000 UTC m=+1012.466023071" observedRunningTime="2025-12-04 06:26:01.408030899 +0000 UTC m=+1017.020848625" watchObservedRunningTime="2025-12-04 06:26:01.43515093 +0000 UTC m=+1017.047968636" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.509032 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" podStartSLOduration=7.580294368 podStartE2EDuration="59.509011001s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.173485361 +0000 UTC m=+962.786303067" lastFinishedPulling="2025-12-04 06:25:59.102201994 +0000 UTC m=+1014.715019700" observedRunningTime="2025-12-04 06:26:01.4989958 +0000 UTC m=+1017.111813516" watchObservedRunningTime="2025-12-04 06:26:01.509011001 +0000 UTC m=+1017.121828717" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.896666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" event={"ID":"81848f9c-5ee4-4fbc-a744-701009bcbe53","Type":"ContainerStarted","Data":"8da21c60420af4cdc69b3864ae19371e4a3c58c29b207e32a604096edd2bde42"} Dec 04 06:26:01 crc 
kubenswrapper[4832]: I1204 06:26:01.897612 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.899114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" event={"ID":"ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea","Type":"ContainerStarted","Data":"92689ff6a46ee96f9b90725ba167235eca6243d3a2276ccdeff408aa3f96df55"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.900688 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.903683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" event={"ID":"63f185bd-a5f7-40a2-b51f-f60bf2c161a9","Type":"ContainerStarted","Data":"f23ddff9c22f53e5f63d942fb814fcfb30fb340d619594898e35350cdca072a9"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.904950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.906503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.907045 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" event={"ID":"f17d47bc-9039-4195-bdbd-e9f58d4c305b","Type":"ContainerStarted","Data":"224af3ca40b451de530a91e824c70220f00ee24cb0c071e47cad3d2f06696cdc"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.907554 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.909065 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" event={"ID":"860c33f9-d57a-45b6-bc73-670d92e753a4","Type":"ContainerStarted","Data":"bbc7358562358660e2cb91cebbbf34d4ed8db4868be8e2bd3d9debf2be3e9971"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.910310 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.910589 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.911299 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.911772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" event={"ID":"f897a405-3157-4e56-b2b8-1076557cab9e","Type":"ContainerStarted","Data":"e2a810cbf15df08ab63060eac7cd80aed4dd6dcb539bfab058be3ec533f33c79"} Dec 04 06:26:01 crc kubenswrapper[4832]: E1204 06:26:01.913103 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" podUID="f897a405-3157-4e56-b2b8-1076557cab9e" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.914835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" event={"ID":"84bf2c21-9b47-46f8-970e-e2e34c5d0112","Type":"ContainerStarted","Data":"9bfb8951f83f73130048acdb78fd8e134965c79824e7ba4036495df7ca0493f7"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.915062 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.916723 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" event={"ID":"2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9","Type":"ContainerStarted","Data":"204fffb9ddcaf645bfc9f509757e8eaf1eb79faa43723cc61ad94e4f6459293a"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.916938 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.918779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" event={"ID":"49edbb71-76d8-4f14-986d-9fd821c55ff4","Type":"ContainerStarted","Data":"b3ccab18ccbc777d5dd7fb967aaac6160a6ce965fb307e7e530ba69848dcd00c"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.919437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.920989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" event={"ID":"35d20429-0e0e-4090-8d0b-9a590e8fd9ab","Type":"ContainerStarted","Data":"38c594f6ea454354d40578ae40bf0080b43d36e827a385cea9dff0810a38e66a"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.921506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.921673 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.923515 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" event={"ID":"7184b79e-0476-4d6d-99f3-329ad46dff61","Type":"ContainerStarted","Data":"4d6b0d4528efc950b23c0b8b08d545fdf2af167795a329eb1349f2d80ce08a8d"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.923560 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.925587 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" 
event={"ID":"ef8f8bec-efa4-4239-839d-791aed710641","Type":"ContainerStarted","Data":"a6391d66f8534fb66390895a5147ac1607ca8458be3aed836bb13a3c6305c0ac"} Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.925613 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.927990 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.930048 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.930625 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-zgnkq" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.931850 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" podStartSLOduration=7.978206555 podStartE2EDuration="59.931834022s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.868667049 +0000 UTC m=+962.481484755" lastFinishedPulling="2025-12-04 06:25:58.822294526 +0000 UTC m=+1014.435112222" observedRunningTime="2025-12-04 06:26:01.928412967 +0000 UTC m=+1017.541230673" watchObservedRunningTime="2025-12-04 06:26:01.931834022 +0000 UTC m=+1017.544651728" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.952814 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6shfb" podStartSLOduration=7.688702525 podStartE2EDuration="59.952798108s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.888946267 +0000 UTC m=+962.501763973" lastFinishedPulling="2025-12-04 06:25:59.15304185 +0000 UTC m=+1014.765859556" observedRunningTime="2025-12-04 06:26:01.947027704 +0000 UTC m=+1017.559845420" watchObservedRunningTime="2025-12-04 06:26:01.952798108 +0000 UTC m=+1017.565615814" Dec 04 06:26:01 crc kubenswrapper[4832]: I1204 06:26:01.968827 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.022011 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-dr2cc" podStartSLOduration=7.914292673 podStartE2EDuration="1m0.021989924s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.119038356 +0000 UTC m=+962.731856062" lastFinishedPulling="2025-12-04 06:25:59.226735607 +0000 UTC m=+1014.839553313" observedRunningTime="2025-12-04 06:26:02.021263345 +0000 UTC m=+1017.634081051" watchObservedRunningTime="2025-12-04 06:26:02.021989924 +0000 UTC m=+1017.634807630" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.041342 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-7x9qz" podStartSLOduration=7.993032728 podStartE2EDuration="1m0.041321888s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" 
firstStartedPulling="2025-12-04 06:25:07.145917541 +0000 UTC m=+962.758735247" lastFinishedPulling="2025-12-04 06:25:59.194206701 +0000 UTC m=+1014.807024407" observedRunningTime="2025-12-04 06:26:02.038065656 +0000 UTC m=+1017.650883372" watchObservedRunningTime="2025-12-04 06:26:02.041321888 +0000 UTC m=+1017.654139594" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.196013 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" podStartSLOduration=7.838802379 podStartE2EDuration="1m0.195990256s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.829279071 +0000 UTC m=+962.442096777" lastFinishedPulling="2025-12-04 06:25:59.186466938 +0000 UTC m=+1014.799284654" observedRunningTime="2025-12-04 06:26:02.164454906 +0000 UTC m=+1017.777272622" watchObservedRunningTime="2025-12-04 06:26:02.195990256 +0000 UTC m=+1017.808807962" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.198473 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lbvnf" podStartSLOduration=8.198161959 podStartE2EDuration="1m0.198463318s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.152758971 +0000 UTC m=+962.765576677" lastFinishedPulling="2025-12-04 06:25:59.15306032 +0000 UTC m=+1014.765878036" observedRunningTime="2025-12-04 06:26:02.1905529 +0000 UTC m=+1017.803370606" watchObservedRunningTime="2025-12-04 06:26:02.198463318 +0000 UTC m=+1017.811281024" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.220216 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" podStartSLOduration=8.143696314 podStartE2EDuration="1m0.220199733s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.151142181 +0000 UTC m=+962.763959887" lastFinishedPulling="2025-12-04 06:25:59.2276456 +0000 UTC m=+1014.840463306" observedRunningTime="2025-12-04 06:26:02.216238173 +0000 UTC m=+1017.829055899" watchObservedRunningTime="2025-12-04 06:26:02.220199733 +0000 UTC m=+1017.833017439" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.234560 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-fl4dk" podStartSLOduration=8.103168448 podStartE2EDuration="1m0.234549183s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.176004774 +0000 UTC m=+962.788822480" lastFinishedPulling="2025-12-04 06:25:59.307385509 +0000 UTC m=+1014.920203215" observedRunningTime="2025-12-04 06:26:02.232697396 +0000 UTC m=+1017.845515122" watchObservedRunningTime="2025-12-04 06:26:02.234549183 +0000 UTC m=+1017.847366889" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.266821 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" podStartSLOduration=8.191813311 podStartE2EDuration="1m0.266800962s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.852834091 +0000 UTC m=+962.465651797" lastFinishedPulling="2025-12-04 06:25:58.927821742 +0000 UTC m=+1014.540639448" observedRunningTime="2025-12-04 06:26:02.261362895 +0000 UTC m=+1017.874180611" 
watchObservedRunningTime="2025-12-04 06:26:02.266800962 +0000 UTC m=+1017.879618668" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.323552 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" podStartSLOduration=8.283583782000001 podStartE2EDuration="1m0.323536984s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.888487416 +0000 UTC m=+962.501305122" lastFinishedPulling="2025-12-04 06:25:58.928440618 +0000 UTC m=+1014.541258324" observedRunningTime="2025-12-04 06:26:02.322527698 +0000 UTC m=+1017.935345394" watchObservedRunningTime="2025-12-04 06:26:02.323536984 +0000 UTC m=+1017.936354690" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.343290 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-9cmtc" podStartSLOduration=8.091037634 podStartE2EDuration="1m0.343269189s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:06.877803227 +0000 UTC m=+962.490620933" lastFinishedPulling="2025-12-04 06:25:59.130034782 +0000 UTC m=+1014.742852488" observedRunningTime="2025-12-04 06:26:02.339315149 +0000 UTC m=+1017.952132855" watchObservedRunningTime="2025-12-04 06:26:02.343269189 +0000 UTC m=+1017.956086895" Dec 04 06:26:02 crc kubenswrapper[4832]: I1204 06:26:02.934143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-lr247" Dec 04 06:26:07 crc kubenswrapper[4832]: E1204 06:26:07.714814 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" podUID="626ec042-7ccd-4a54-8625-de8861efca16" Dec 04 06:26:08 crc kubenswrapper[4832]: I1204 06:26:08.795198 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wr29d" Dec 04 06:26:09 crc kubenswrapper[4832]: I1204 06:26:09.081653 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr" Dec 04 06:26:10 crc kubenswrapper[4832]: I1204 06:26:10.813577 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5986db9d67-699q9" Dec 04 06:26:12 crc kubenswrapper[4832]: I1204 06:26:12.701137 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wwmfh" Dec 04 06:26:12 crc kubenswrapper[4832]: I1204 06:26:12.791583 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-s5wdp" Dec 04 06:26:13 crc kubenswrapper[4832]: I1204 06:26:13.117797 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xd7gs" Dec 04 06:26:13 crc kubenswrapper[4832]: I1204 06:26:13.378815 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-djqmz" Dec 04 06:26:14 crc kubenswrapper[4832]: I1204 06:26:14.007084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" event={"ID":"f897a405-3157-4e56-b2b8-1076557cab9e","Type":"ContainerStarted","Data":"39bb49e86bc3d446efd3efeabfb72b1b716f3907c7dbb1f7f7d01e41647d5ab0"} Dec 04 06:26:14 crc kubenswrapper[4832]: I1204 06:26:14.007628 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:26:14 crc kubenswrapper[4832]: I1204 06:26:14.009140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" event={"ID":"e6d35b26-0a9e-4174-a073-d0a608dbafcd","Type":"ContainerStarted","Data":"03ec9c2cb2ba8c10489c6961b51ca038ca02dc3f7712653d2ffbbcca7a7ace16"} Dec 04 06:26:14 crc kubenswrapper[4832]: I1204 06:26:14.009332 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:26:14 crc kubenswrapper[4832]: I1204 06:26:14.030203 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" podStartSLOduration=5.669212191 podStartE2EDuration="1m12.030186088s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.148591687 +0000 UTC m=+962.761409393" lastFinishedPulling="2025-12-04 06:26:13.509565584 +0000 UTC m=+1029.122383290" observedRunningTime="2025-12-04 06:26:14.025336727 +0000 UTC m=+1029.638154433" watchObservedRunningTime="2025-12-04 06:26:14.030186088 +0000 UTC m=+1029.643003794" Dec 04 06:26:14 crc kubenswrapper[4832]: I1204 06:26:14.046192 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" podStartSLOduration=4.683012487 podStartE2EDuration="1m11.046174319s" podCreationTimestamp="2025-12-04 06:25:03 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.147586442 +0000 UTC m=+962.760404148" lastFinishedPulling="2025-12-04 06:26:13.510748274 +0000 UTC m=+1029.123565980" observedRunningTime="2025-12-04 06:26:14.039677516 +0000 UTC m=+1029.652495222" watchObservedRunningTime="2025-12-04 06:26:14.046174319 +0000 UTC m=+1029.658992025" Dec 04 06:26:15 crc kubenswrapper[4832]: I1204 06:26:15.017594 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" event={"ID":"cac84290-1321-4a86-a4c0-06019e9d5dfd","Type":"ContainerStarted","Data":"516f77d2d26ade6592016d8493630f8d0d07852ec55e708bc50cad078187bbd4"} Dec 04 06:26:15 crc kubenswrapper[4832]: I1204 06:26:15.017930 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:26:15 crc kubenswrapper[4832]: I1204 06:26:15.036411 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" podStartSLOduration=5.976635699 podStartE2EDuration="1m13.036382457s" podCreationTimestamp="2025-12-04 06:25:02 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.173181904 +0000 UTC m=+962.785999610" lastFinishedPulling="2025-12-04 06:26:14.232928662 +0000 UTC 
m=+1029.845746368" observedRunningTime="2025-12-04 06:26:15.033455944 +0000 UTC m=+1030.646273660" watchObservedRunningTime="2025-12-04 06:26:15.036382457 +0000 UTC m=+1030.649200163" Dec 04 06:26:20 crc kubenswrapper[4832]: I1204 06:26:20.132178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" event={"ID":"626ec042-7ccd-4a54-8625-de8861efca16","Type":"ContainerStarted","Data":"11f39083975b561f302dadb2fe70ddb0ca48e0a8ab9b368d3986336db3a651c4"} Dec 04 06:26:20 crc kubenswrapper[4832]: I1204 06:26:20.157913 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qx7fl" podStartSLOduration=4.20354255 podStartE2EDuration="1m16.157889784s" podCreationTimestamp="2025-12-04 06:25:04 +0000 UTC" firstStartedPulling="2025-12-04 06:25:07.153928291 +0000 UTC m=+962.766745997" lastFinishedPulling="2025-12-04 06:26:19.108275505 +0000 UTC m=+1034.721093231" observedRunningTime="2025-12-04 06:26:20.152361735 +0000 UTC m=+1035.765179441" watchObservedRunningTime="2025-12-04 06:26:20.157889784 +0000 UTC m=+1035.770707510" Dec 04 06:26:24 crc kubenswrapper[4832]: I1204 06:26:24.224375 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-wjbl9" Dec 04 06:26:24 crc kubenswrapper[4832]: I1204 06:26:24.289069 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zc52r" Dec 04 06:26:24 crc kubenswrapper[4832]: I1204 06:26:24.539440 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-htcvz" Dec 04 06:26:35 crc kubenswrapper[4832]: I1204 06:26:35.362854 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:26:35 crc kubenswrapper[4832]: I1204 06:26:35.363431 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.372561 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wbxp6"] Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.374476 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.377359 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9jvll" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.378032 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.378057 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.378204 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.401194 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wbxp6"] Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.494054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82q6\" (UniqueName: \"kubernetes.io/projected/a8063bac-d76a-41ad-8438-0a3bdb0f727d-kube-api-access-h82q6\") pod \"dnsmasq-dns-675f4bcbfc-wbxp6\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.494372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8063bac-d76a-41ad-8438-0a3bdb0f727d-config\") pod \"dnsmasq-dns-675f4bcbfc-wbxp6\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.504254 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxs7b"] Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.507816 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.510341 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.520067 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxs7b"] Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.595838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82q6\" (UniqueName: \"kubernetes.io/projected/a8063bac-d76a-41ad-8438-0a3bdb0f727d-kube-api-access-h82q6\") pod \"dnsmasq-dns-675f4bcbfc-wbxp6\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.595925 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8063bac-d76a-41ad-8438-0a3bdb0f727d-config\") pod \"dnsmasq-dns-675f4bcbfc-wbxp6\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.596959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8063bac-d76a-41ad-8438-0a3bdb0f727d-config\") pod \"dnsmasq-dns-675f4bcbfc-wbxp6\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.618018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82q6\" (UniqueName: \"kubernetes.io/projected/a8063bac-d76a-41ad-8438-0a3bdb0f727d-kube-api-access-h82q6\") pod \"dnsmasq-dns-675f4bcbfc-wbxp6\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.696903 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.698134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5mk\" (UniqueName: \"kubernetes.io/projected/dc7db8e8-4255-4c40-8777-1961aeadb752-kube-api-access-6p5mk\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.698171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.698196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-config\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.844557 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p5mk\" (UniqueName: \"kubernetes.io/projected/dc7db8e8-4255-4c40-8777-1961aeadb752-kube-api-access-6p5mk\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.844620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.844648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-config\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.845894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.845938 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-config\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:42 crc kubenswrapper[4832]: I1204 06:26:42.876719 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p5mk\" (UniqueName: \"kubernetes.io/projected/dc7db8e8-4255-4c40-8777-1961aeadb752-kube-api-access-6p5mk\") pod \"dnsmasq-dns-78dd6ddcc-zxs7b\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:43 
crc kubenswrapper[4832]: I1204 06:26:43.129210 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:26:43 crc kubenswrapper[4832]: I1204 06:26:43.218726 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wbxp6"] Dec 04 06:26:43 crc kubenswrapper[4832]: I1204 06:26:43.428254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" event={"ID":"a8063bac-d76a-41ad-8438-0a3bdb0f727d","Type":"ContainerStarted","Data":"678b4444fa8f9df39bf6f754cf6b0ed62aa702e1d468d446de04f81e6159d302"} Dec 04 06:26:43 crc kubenswrapper[4832]: I1204 06:26:43.588728 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxs7b"] Dec 04 06:26:43 crc kubenswrapper[4832]: W1204 06:26:43.590009 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7db8e8_4255_4c40_8777_1961aeadb752.slice/crio-4b517b1de7a58c19e505466209a076b1e5111eb39ab07ecbce50ca3457614597 WatchSource:0}: Error finding container 4b517b1de7a58c19e505466209a076b1e5111eb39ab07ecbce50ca3457614597: Status 404 returned error can't find the container with id 4b517b1de7a58c19e505466209a076b1e5111eb39ab07ecbce50ca3457614597 Dec 04 06:26:44 crc kubenswrapper[4832]: I1204 06:26:44.438679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" event={"ID":"dc7db8e8-4255-4c40-8777-1961aeadb752","Type":"ContainerStarted","Data":"4b517b1de7a58c19e505466209a076b1e5111eb39ab07ecbce50ca3457614597"} Dec 04 06:26:45 crc kubenswrapper[4832]: I1204 06:26:45.920005 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wbxp6"] Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.042051 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tsbps"] Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.044203 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.138801 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.139010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-config\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.139045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbqt7\" (UniqueName: \"kubernetes.io/projected/ef95dc8a-7c26-4053-86e7-8e5436a60482-kube-api-access-rbqt7\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.149507 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tsbps"] Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.241141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.241224 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-config\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.241250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbqt7\" (UniqueName: \"kubernetes.io/projected/ef95dc8a-7c26-4053-86e7-8e5436a60482-kube-api-access-rbqt7\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.242842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-config\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.243227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.286590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbqt7\" (UniqueName: 
\"kubernetes.io/projected/ef95dc8a-7c26-4053-86e7-8e5436a60482-kube-api-access-rbqt7\") pod \"dnsmasq-dns-666b6646f7-tsbps\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.420729 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.729426 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxs7b"] Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.764730 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcn6q"] Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.767991 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.789678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bx5t\" (UniqueName: \"kubernetes.io/projected/ab377657-262b-4d67-8277-66bbc01db3dd-kube-api-access-4bx5t\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.789755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.789814 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-config\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.857601 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcn6q"] Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.892159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.892262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-config\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.892299 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bx5t\" (UniqueName: \"kubernetes.io/projected/ab377657-262b-4d67-8277-66bbc01db3dd-kube-api-access-4bx5t\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.893584 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.893955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-config\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:46 crc kubenswrapper[4832]: I1204 06:26:46.946223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bx5t\" (UniqueName: \"kubernetes.io/projected/ab377657-262b-4d67-8277-66bbc01db3dd-kube-api-access-4bx5t\") pod \"dnsmasq-dns-57d769cc4f-gcn6q\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.120130 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.324571 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.326069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.339656 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.400513 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.400743 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.411700 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.411777 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b2fcj" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.411934 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.412079 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.538116 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618227 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618335 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618409 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618465 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvjv\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-kube-api-access-5pvjv\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.618508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 
04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720424 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720482 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720586 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720609 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720691 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.720723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvjv\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-kube-api-access-5pvjv\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.722557 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.726695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.727777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.728040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.728612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.729032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.736634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.739286 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.746857 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.758848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.773108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvjv\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-kube-api-access-5pvjv\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.784892 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tsbps"] Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.813702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: W1204 06:26:47.843677 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef95dc8a_7c26_4053_86e7_8e5436a60482.slice/crio-432ca473398559dedca75b166de0c18d050124a08786b6f53b63deec267fa637 WatchSource:0}: Error finding container 432ca473398559dedca75b166de0c18d050124a08786b6f53b63deec267fa637: Status 404 returned error can't find the container with id 432ca473398559dedca75b166de0c18d050124a08786b6f53b63deec267fa637 Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.953468 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.983807 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.985456 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.988797 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.990021 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.990149 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.990184 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.990806 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.990943 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.991046 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gzxvp" Dec 04 06:26:47 crc kubenswrapper[4832]: I1204 06:26:47.991104 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.079619 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcn6q"] Dec 04 06:26:48 crc kubenswrapper[4832]: W1204 06:26:48.095837 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab377657_262b_4d67_8277_66bbc01db3dd.slice/crio-bd21c35ea4b7644e336a399adcc83de1de0584dba4f0b25d2cb75d4965a742c6 WatchSource:0}: Error finding container bd21c35ea4b7644e336a399adcc83de1de0584dba4f0b25d2cb75d4965a742c6: Status 404 returned error can't find the container with id bd21c35ea4b7644e336a399adcc83de1de0584dba4f0b25d2cb75d4965a742c6 Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129534 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d41c5c2-5373-423b-b14f-00c902111ee3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129633 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129706 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129751 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d41c5c2-5373-423b-b14f-00c902111ee3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.129827 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.130032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfkdz\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-kube-api-access-xfkdz\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfkdz\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-kube-api-access-xfkdz\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231646 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231721 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231744 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d41c5c2-5373-423b-b14f-00c902111ee3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231927 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d41c5c2-5373-423b-b14f-00c902111ee3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.231994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.232288 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.232544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.232888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.233831 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.234315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.236901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.237722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d41c5c2-5373-423b-b14f-00c902111ee3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.237851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.238018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d41c5c2-5373-423b-b14f-00c902111ee3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.241636 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.274333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfkdz\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-kube-api-access-xfkdz\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.301439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.316293 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.707620 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:26:48 crc kubenswrapper[4832]: W1204 06:26:48.739101 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee9dc35_7baf_448f_a6fc_3f73c1b5d6f3.slice/crio-98376ab0a149a0dbcf39f8f680d48fa1374dc2756604bfc47f99019c2ed2cbfe WatchSource:0}: Error finding container 98376ab0a149a0dbcf39f8f680d48fa1374dc2756604bfc47f99019c2ed2cbfe: Status 404 returned error can't find the container with id 98376ab0a149a0dbcf39f8f680d48fa1374dc2756604bfc47f99019c2ed2cbfe Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.743833 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.745582 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.752637 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-p89bp" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.752824 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.753035 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.753166 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.764437 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.772432 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.773863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" event={"ID":"ef95dc8a-7c26-4053-86e7-8e5436a60482","Type":"ContainerStarted","Data":"432ca473398559dedca75b166de0c18d050124a08786b6f53b63deec267fa637"} Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.776888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" event={"ID":"ab377657-262b-4d67-8277-66bbc01db3dd","Type":"ContainerStarted","Data":"bd21c35ea4b7644e336a399adcc83de1de0584dba4f0b25d2cb75d4965a742c6"} Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.863849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-kolla-config\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.863898 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z56k\" (UniqueName: \"kubernetes.io/projected/22fcd5ed-0004-4329-b8c6-7855939765dc-kube-api-access-6z56k\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.863943 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fcd5ed-0004-4329-b8c6-7855939765dc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.863984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.864023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.864050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcd5ed-0004-4329-b8c6-7855939765dc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.864074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.864094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22fcd5ed-0004-4329-b8c6-7855939765dc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.965636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-kolla-config\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.965723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z56k\" (UniqueName: \"kubernetes.io/projected/22fcd5ed-0004-4329-b8c6-7855939765dc-kube-api-access-6z56k\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.965799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fcd5ed-0004-4329-b8c6-7855939765dc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.965847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.965910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-config-data-default\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.965944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcd5ed-0004-4329-b8c6-7855939765dc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: 
I1204 06:26:48.966001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.966030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22fcd5ed-0004-4329-b8c6-7855939765dc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.966492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-kolla-config\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.966685 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22fcd5ed-0004-4329-b8c6-7855939765dc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.967251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-config-data-default\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.967357 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.969020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fcd5ed-0004-4329-b8c6-7855939765dc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.973977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fcd5ed-0004-4329-b8c6-7855939765dc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.974444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fcd5ed-0004-4329-b8c6-7855939765dc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:48 crc kubenswrapper[4832]: I1204 06:26:48.990232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z56k\" (UniqueName: \"kubernetes.io/projected/22fcd5ed-0004-4329-b8c6-7855939765dc-kube-api-access-6z56k\") pod 
\"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.002111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"22fcd5ed-0004-4329-b8c6-7855939765dc\") " pod="openstack/openstack-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.126907 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.163501 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:26:49 crc kubenswrapper[4832]: W1204 06:26:49.204642 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d41c5c2_5373_423b_b14f_00c902111ee3.slice/crio-68f3ab45862f4bd190c9f4d90c8ea2c64006a6c1ad29c64ad70b40c5c740f66a WatchSource:0}: Error finding container 68f3ab45862f4bd190c9f4d90c8ea2c64006a6c1ad29c64ad70b40c5c740f66a: Status 404 returned error can't find the container with id 68f3ab45862f4bd190c9f4d90c8ea2c64006a6c1ad29c64ad70b40c5c740f66a Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.412115 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.416738 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.420804 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.421876 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6h9c6" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.421935 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.427562 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.478826 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575113 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmc6q\" (UniqueName: \"kubernetes.io/projected/9841a1c2-83f5-475b-8180-b1e9cd13467b-kube-api-access-zmc6q\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9841a1c2-83f5-475b-8180-b1e9cd13467b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575252 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9841a1c2-83f5-475b-8180-b1e9cd13467b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575317 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9841a1c2-83f5-475b-8180-b1e9cd13467b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.575524 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.679942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmc6q\" (UniqueName: \"kubernetes.io/projected/9841a1c2-83f5-475b-8180-b1e9cd13467b-kube-api-access-zmc6q\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.679985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9841a1c2-83f5-475b-8180-b1e9cd13467b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.680049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.680084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.680099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9841a1c2-83f5-475b-8180-b1e9cd13467b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.680116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9841a1c2-83f5-475b-8180-b1e9cd13467b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.680133 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.680172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.682217 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9841a1c2-83f5-475b-8180-b1e9cd13467b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.683700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.684491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.684714 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.685115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9841a1c2-83f5-475b-8180-b1e9cd13467b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.689717 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9841a1c2-83f5-475b-8180-b1e9cd13467b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.700741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9841a1c2-83f5-475b-8180-b1e9cd13467b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.720673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmc6q\" (UniqueName: \"kubernetes.io/projected/9841a1c2-83f5-475b-8180-b1e9cd13467b-kube-api-access-zmc6q\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.750748 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9841a1c2-83f5-475b-8180-b1e9cd13467b\") " pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.764284 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.818607 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1d41c5c2-5373-423b-b14f-00c902111ee3","Type":"ContainerStarted","Data":"68f3ab45862f4bd190c9f4d90c8ea2c64006a6c1ad29c64ad70b40c5c740f66a"} Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.821013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3","Type":"ContainerStarted","Data":"98376ab0a149a0dbcf39f8f680d48fa1374dc2756604bfc47f99019c2ed2cbfe"} Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.828534 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.829494 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.834899 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.835136 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7tw2g" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.835297 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 04 06:26:49 crc kubenswrapper[4832]: I1204 06:26:49.883420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.005567 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b075a1-3f92-493c-93d2-a776141dba44-config-data\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.005618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b075a1-3f92-493c-93d2-a776141dba44-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.005657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3b075a1-3f92-493c-93d2-a776141dba44-kolla-config\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.005753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nrl\" (UniqueName: \"kubernetes.io/projected/e3b075a1-3f92-493c-93d2-a776141dba44-kube-api-access-p2nrl\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.005778 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b075a1-3f92-493c-93d2-a776141dba44-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.106892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3b075a1-3f92-493c-93d2-a776141dba44-kolla-config\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.106971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nrl\" (UniqueName: \"kubernetes.io/projected/e3b075a1-3f92-493c-93d2-a776141dba44-kube-api-access-p2nrl\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.106999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3b075a1-3f92-493c-93d2-a776141dba44-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.107042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b075a1-3f92-493c-93d2-a776141dba44-config-data\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.107070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b075a1-3f92-493c-93d2-a776141dba44-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.108849 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3b075a1-3f92-493c-93d2-a776141dba44-kolla-config\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.109714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b075a1-3f92-493c-93d2-a776141dba44-config-data\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.137219 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b075a1-3f92-493c-93d2-a776141dba44-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.144590 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.154109 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b075a1-3f92-493c-93d2-a776141dba44-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.164188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nrl\" (UniqueName: \"kubernetes.io/projected/e3b075a1-3f92-493c-93d2-a776141dba44-kube-api-access-p2nrl\") pod \"memcached-0\" (UID: \"e3b075a1-3f92-493c-93d2-a776141dba44\") " pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.250233 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.686772 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.895920 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9841a1c2-83f5-475b-8180-b1e9cd13467b","Type":"ContainerStarted","Data":"129fd5d932e59a71cbd0d4ac11603abec4c1f7b02bef478658239a9b7f41b7f2"} Dec 04 06:26:50 crc kubenswrapper[4832]: I1204 06:26:50.900957 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"22fcd5ed-0004-4329-b8c6-7855939765dc","Type":"ContainerStarted","Data":"07af2ee41446091f4c8373327a13124b08a0a8c4656118fa5a4afd883742f9a2"} Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.178987 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 04 06:26:51 crc kubenswrapper[4832]: W1204 06:26:51.217161 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3b075a1_3f92_493c_93d2_a776141dba44.slice/crio-411a15b6e536eb10bd2e07e40dbf437ea91ea0bca777a01bb128b10b45f76a77 WatchSource:0}: Error finding container 411a15b6e536eb10bd2e07e40dbf437ea91ea0bca777a01bb128b10b45f76a77: Status 404 returned error can't find the container with id 411a15b6e536eb10bd2e07e40dbf437ea91ea0bca777a01bb128b10b45f76a77 Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.614927 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.615962 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.622686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vpzrn" Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.636223 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.804460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp469\" (UniqueName: \"kubernetes.io/projected/60daac54-910d-4a74-8a05-ab520ea21cab-kube-api-access-zp469\") pod \"kube-state-metrics-0\" (UID: \"60daac54-910d-4a74-8a05-ab520ea21cab\") " pod="openstack/kube-state-metrics-0" Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.906925 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp469\" (UniqueName: \"kubernetes.io/projected/60daac54-910d-4a74-8a05-ab520ea21cab-kube-api-access-zp469\") pod \"kube-state-metrics-0\" (UID: \"60daac54-910d-4a74-8a05-ab520ea21cab\") " pod="openstack/kube-state-metrics-0" Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.931938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e3b075a1-3f92-493c-93d2-a776141dba44","Type":"ContainerStarted","Data":"411a15b6e536eb10bd2e07e40dbf437ea91ea0bca777a01bb128b10b45f76a77"} Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.954054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp469\" (UniqueName: \"kubernetes.io/projected/60daac54-910d-4a74-8a05-ab520ea21cab-kube-api-access-zp469\") pod \"kube-state-metrics-0\" (UID: \"60daac54-910d-4a74-8a05-ab520ea21cab\") " pod="openstack/kube-state-metrics-0" Dec 04 06:26:51 crc kubenswrapper[4832]: I1204 06:26:51.996202 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 06:26:54 crc kubenswrapper[4832]: I1204 06:26:54.031977 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.058924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60daac54-910d-4a74-8a05-ab520ea21cab","Type":"ContainerStarted","Data":"c20613d91f7178771e1a2ef1cd50a324012b2435eb66b4566fed941b5365fd05"} Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.162816 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kcxl8"] Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.164002 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.171049 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.171382 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-g5zdk" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.171515 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.174413 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kcxl8"] Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.208077 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-m96v7"] Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.211073 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.224568 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m96v7"] Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266208 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-log-ovn\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-run\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-log\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-ovn-controller-tls-certs\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266342 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-run\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266370 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-scripts\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " 
pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-scripts\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qwl\" (UniqueName: \"kubernetes.io/projected/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-kube-api-access-f2qwl\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-etc-ovs\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266476 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-lib\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.266513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-run-ovn\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.267185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-combined-ca-bundle\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.267231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgl9d\" (UniqueName: \"kubernetes.io/projected/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-kube-api-access-cgl9d\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-etc-ovs\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-lib\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc 
kubenswrapper[4832]: I1204 06:26:55.369360 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-run-ovn\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369414 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-combined-ca-bundle\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgl9d\" (UniqueName: \"kubernetes.io/projected/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-kube-api-access-cgl9d\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-log-ovn\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-run\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369520 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-log\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-ovn-controller-tls-certs\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-run\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-scripts\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369597 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-scripts\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2qwl\" (UniqueName: \"kubernetes.io/projected/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-kube-api-access-f2qwl\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.369993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-etc-ovs\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.370119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-lib\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.370242 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-run-ovn\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.371423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-log\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.372037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-var-run\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.376594 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-log-ovn\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.376806 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-var-run\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.379327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-scripts\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.380612 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-scripts\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.384709 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-combined-ca-bundle\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.386983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2qwl\" (UniqueName: \"kubernetes.io/projected/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-kube-api-access-f2qwl\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.391654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6de6fb2f-c87b-41af-8e93-05d7da0fad2a-ovn-controller-tls-certs\") pod \"ovn-controller-kcxl8\" (UID: \"6de6fb2f-c87b-41af-8e93-05d7da0fad2a\") " pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.402190 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgl9d\" (UniqueName: \"kubernetes.io/projected/f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf-kube-api-access-cgl9d\") pod \"ovn-controller-ovs-m96v7\" (UID: \"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf\") " pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.504700 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kcxl8" Dec 04 06:26:55 crc kubenswrapper[4832]: I1204 06:26:55.543141 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.159078 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.173536 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.174654 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.178913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.178965 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kgcc6" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.178913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.179231 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.181004 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297413 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297467 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlczf\" (UniqueName: \"kubernetes.io/projected/8f89488d-b176-4bf6-9172-ed2fc6492019-kube-api-access-qlczf\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f89488d-b176-4bf6-9172-ed2fc6492019-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f89488d-b176-4bf6-9172-ed2fc6492019-config\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.297700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f89488d-b176-4bf6-9172-ed2fc6492019-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.398936 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f89488d-b176-4bf6-9172-ed2fc6492019-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.398996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f89488d-b176-4bf6-9172-ed2fc6492019-config\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8f89488d-b176-4bf6-9172-ed2fc6492019-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlczf\" (UniqueName: \"kubernetes.io/projected/8f89488d-b176-4bf6-9172-ed2fc6492019-kube-api-access-qlczf\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399569 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/8f89488d-b176-4bf6-9172-ed2fc6492019-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.399893 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.400421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f89488d-b176-4bf6-9172-ed2fc6492019-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.402091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f89488d-b176-4bf6-9172-ed2fc6492019-config\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.408472 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.408485 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.409099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f89488d-b176-4bf6-9172-ed2fc6492019-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.419915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlczf\" (UniqueName: \"kubernetes.io/projected/8f89488d-b176-4bf6-9172-ed2fc6492019-kube-api-access-qlczf\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.423269 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8f89488d-b176-4bf6-9172-ed2fc6492019\") " pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:56 crc kubenswrapper[4832]: I1204 06:26:56.510129 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.866079 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.868383 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.870258 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.872634 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.872809 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-lzrjg" Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.872940 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 04 06:26:58 crc kubenswrapper[4832]: I1204 06:26:58.916480 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003416 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a470eda9-a394-4ecb-a723-404f00bbd45a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a470eda9-a394-4ecb-a723-404f00bbd45a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9bq\" (UniqueName: \"kubernetes.io/projected/a470eda9-a394-4ecb-a723-404f00bbd45a-kube-api-access-tb9bq\") pod \"ovsdbserver-sb-0\" (UID: 
\"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.003701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a470eda9-a394-4ecb-a723-404f00bbd45a-config\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.105522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a470eda9-a394-4ecb-a723-404f00bbd45a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.105869 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.106017 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9bq\" (UniqueName: \"kubernetes.io/projected/a470eda9-a394-4ecb-a723-404f00bbd45a-kube-api-access-tb9bq\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.106144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a470eda9-a394-4ecb-a723-404f00bbd45a-config\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.106234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.106313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a470eda9-a394-4ecb-a723-404f00bbd45a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.106453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.106564 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.108255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/a470eda9-a394-4ecb-a723-404f00bbd45a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.108572 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a470eda9-a394-4ecb-a723-404f00bbd45a-config\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.126042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.126679 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a470eda9-a394-4ecb-a723-404f00bbd45a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.134446 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9bq\" (UniqueName: \"kubernetes.io/projected/a470eda9-a394-4ecb-a723-404f00bbd45a-kube-api-access-tb9bq\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.140730 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.151590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.166266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a470eda9-a394-4ecb-a723-404f00bbd45a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.207143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a470eda9-a394-4ecb-a723-404f00bbd45a\") " pod="openstack/ovsdbserver-sb-0" Dec 04 06:26:59 crc kubenswrapper[4832]: I1204 06:26:59.487397 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 04 06:27:05 crc kubenswrapper[4832]: I1204 06:27:05.362725 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:27:05 crc kubenswrapper[4832]: I1204 06:27:05.363349 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:27:10 crc kubenswrapper[4832]: E1204 06:27:10.529231 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 04 06:27:10 crc kubenswrapper[4832]: E1204 06:27:10.529907 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z56k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(22fcd5ed-0004-4329-b8c6-7855939765dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:27:10 crc kubenswrapper[4832]: E1204 06:27:10.531015 4832 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="22fcd5ed-0004-4329-b8c6-7855939765dc" Dec 04 06:27:10 crc kubenswrapper[4832]: E1204 06:27:10.543371 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 04 06:27:10 crc kubenswrapper[4832]: E1204 06:27:10.543577 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfkdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(1d41c5c2-5373-423b-b14f-00c902111ee3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:27:10 crc kubenswrapper[4832]: E1204 06:27:10.545559 4832 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" Dec 04 06:27:11 crc kubenswrapper[4832]: E1204 06:27:11.259351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" Dec 04 06:27:11 crc kubenswrapper[4832]: E1204 06:27:11.259412 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="22fcd5ed-0004-4329-b8c6-7855939765dc" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.055242 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.055451 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmc6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(9841a1c2-83f5-475b-8180-b1e9cd13467b): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.056628 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="9841a1c2-83f5-475b-8180-b1e9cd13467b" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.291640 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="9841a1c2-83f5-475b-8180-b1e9cd13467b" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.802850 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.803501 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbqt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-tsbps_openstack(ef95dc8a-7c26-4053-86e7-8e5436a60482): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
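
Note: the failure pattern is the same across openstack-galera-0, rabbitmq-cell1-server-0, openstack-cell1-galera-0 and the dnsmasq pods: the CRI image pull is aborted mid-copy ("rpc error: code = Canceled desc = copying config: context canceled"), the init container is reported with ErrImagePull, and on the next sync the kubelet switches to ImagePullBackOff and retries with an increasing delay. A sketch for spotting containers stuck in this state, assuming the Python kubernetes client (an assumption; not part of the log):

    # Sketch (assumed tooling): list containers waiting in pull errors/backoff,
    # as seen for openstack-galera-0 and rabbitmq-cell1-server-0 above.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod("openstack").items:
        statuses = (pod.status.init_container_statuses or []) + \
                   (pod.status.container_statuses or [])
        for st in statuses:
            waiting = st.state.waiting if st.state else None
            if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                # e.g. openstack-galera-0 mysql-bootstrap ImagePullBackOff
                print(pod.metadata.name, st.name, waiting.reason)

Dec 04 06:27:15 crc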
kubenswrapper[4832]: E1204 06:27:15.811556 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" podUID="ef95dc8a-7c26-4053-86e7-8e5436a60482" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.811695 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.811856 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h82q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wbxp6_openstack(a8063bac-d76a-41ad-8438-0a3bdb0f727d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.813028 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" podUID="a8063bac-d76a-41ad-8438-0a3bdb0f727d" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.837665 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.837856 4832 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bx5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-gcn6q_openstack(ab377657-262b-4d67-8277-66bbc01db3dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.839779 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" podUID="ab377657-262b-4d67-8277-66bbc01db3dd" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.871326 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.871528 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6p5mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-zxs7b_openstack(dc7db8e8-4255-4c40-8777-1961aeadb752): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:27:15 crc kubenswrapper[4832]: E1204 06:27:15.873571 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" podUID="dc7db8e8-4255-4c40-8777-1961aeadb752" Dec 04 06:27:16 crc kubenswrapper[4832]: E1204 06:27:16.299957 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" podUID="ef95dc8a-7c26-4053-86e7-8e5436a60482" Dec 04 06:27:16 crc kubenswrapper[4832]: E1204 06:27:16.302782 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" podUID="ab377657-262b-4d67-8277-66bbc01db3dd" Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.466178 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kcxl8"] Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.729174 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.831212 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m96v7"] Dec 04 06:27:16 crc kubenswrapper[4832]: W1204 
06:27:16.893611 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f89488d_b176_4bf6_9172_ed2fc6492019.slice/crio-453049f14949c79d54b148aceef0d86f5fc4ac32c484196cff9f4f5e24fc3a1c WatchSource:0}: Error finding container 453049f14949c79d54b148aceef0d86f5fc4ac32c484196cff9f4f5e24fc3a1c: Status 404 returned error can't find the container with id 453049f14949c79d54b148aceef0d86f5fc4ac32c484196cff9f4f5e24fc3a1c Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.974667 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.982815 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.995310 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8063bac-d76a-41ad-8438-0a3bdb0f727d-config\") pod \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.995370 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h82q6\" (UniqueName: \"kubernetes.io/projected/a8063bac-d76a-41ad-8438-0a3bdb0f727d-kube-api-access-h82q6\") pod \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\" (UID: \"a8063bac-d76a-41ad-8438-0a3bdb0f727d\") " Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.995850 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8063bac-d76a-41ad-8438-0a3bdb0f727d-config" (OuterVolumeSpecName: "config") pod "a8063bac-d76a-41ad-8438-0a3bdb0f727d" (UID: "a8063bac-d76a-41ad-8438-0a3bdb0f727d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:16 crc kubenswrapper[4832]: I1204 06:27:16.996299 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8063bac-d76a-41ad-8438-0a3bdb0f727d-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.001527 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8063bac-d76a-41ad-8438-0a3bdb0f727d-kube-api-access-h82q6" (OuterVolumeSpecName: "kube-api-access-h82q6") pod "a8063bac-d76a-41ad-8438-0a3bdb0f727d" (UID: "a8063bac-d76a-41ad-8438-0a3bdb0f727d"). InnerVolumeSpecName "kube-api-access-h82q6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.097208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p5mk\" (UniqueName: \"kubernetes.io/projected/dc7db8e8-4255-4c40-8777-1961aeadb752-kube-api-access-6p5mk\") pod \"dc7db8e8-4255-4c40-8777-1961aeadb752\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.097483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-dns-svc\") pod \"dc7db8e8-4255-4c40-8777-1961aeadb752\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.097521 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-config\") pod \"dc7db8e8-4255-4c40-8777-1961aeadb752\" (UID: \"dc7db8e8-4255-4c40-8777-1961aeadb752\") " Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.097861 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h82q6\" (UniqueName: \"kubernetes.io/projected/a8063bac-d76a-41ad-8438-0a3bdb0f727d-kube-api-access-h82q6\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.098530 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-config" (OuterVolumeSpecName: "config") pod "dc7db8e8-4255-4c40-8777-1961aeadb752" (UID: "dc7db8e8-4255-4c40-8777-1961aeadb752"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.098526 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc7db8e8-4255-4c40-8777-1961aeadb752" (UID: "dc7db8e8-4255-4c40-8777-1961aeadb752"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.101933 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7db8e8-4255-4c40-8777-1961aeadb752-kube-api-access-6p5mk" (OuterVolumeSpecName: "kube-api-access-6p5mk") pod "dc7db8e8-4255-4c40-8777-1961aeadb752" (UID: "dc7db8e8-4255-4c40-8777-1961aeadb752"). InnerVolumeSpecName "kube-api-access-6p5mk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.199047 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p5mk\" (UniqueName: \"kubernetes.io/projected/dc7db8e8-4255-4c40-8777-1961aeadb752-kube-api-access-6p5mk\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.199081 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.199091 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7db8e8-4255-4c40-8777-1961aeadb752-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.305698 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.305695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wbxp6" event={"ID":"a8063bac-d76a-41ad-8438-0a3bdb0f727d","Type":"ContainerDied","Data":"678b4444fa8f9df39bf6f754cf6b0ed62aa702e1d468d446de04f81e6159d302"} Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.306573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8f89488d-b176-4bf6-9172-ed2fc6492019","Type":"ContainerStarted","Data":"453049f14949c79d54b148aceef0d86f5fc4ac32c484196cff9f4f5e24fc3a1c"} Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.307994 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kcxl8" event={"ID":"6de6fb2f-c87b-41af-8e93-05d7da0fad2a","Type":"ContainerStarted","Data":"a6360772c6a29ff0ed42fc57df720e90b1e19b2503ebcbc66e7a2fb8a91de420"} Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.309382 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m96v7" event={"ID":"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf","Type":"ContainerStarted","Data":"04b1cbb12529275d58e521115eebbc3589c6671d6cf85a7b10369e692a1df02e"} Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.311482 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e3b075a1-3f92-493c-93d2-a776141dba44","Type":"ContainerStarted","Data":"402133784becd8de3a259ad106e2dfdfaf3faf81d9cffd5a5a28cb16749bf887"} Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.311698 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.313378 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" event={"ID":"dc7db8e8-4255-4c40-8777-1961aeadb752","Type":"ContainerDied","Data":"4b517b1de7a58c19e505466209a076b1e5111eb39ab07ecbce50ca3457614597"} Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.313447 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zxs7b" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.347258 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.844464395 podStartE2EDuration="28.347237825s" podCreationTimestamp="2025-12-04 06:26:49 +0000 UTC" firstStartedPulling="2025-12-04 06:26:51.252729599 +0000 UTC m=+1066.865547305" lastFinishedPulling="2025-12-04 06:27:15.755503029 +0000 UTC m=+1091.368320735" observedRunningTime="2025-12-04 06:27:17.341838069 +0000 UTC m=+1092.954655775" watchObservedRunningTime="2025-12-04 06:27:17.347237825 +0000 UTC m=+1092.960055531" Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.389902 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxs7b"] Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.400651 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zxs7b"] Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.420720 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wbxp6"] Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.428542 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wbxp6"] Dec 04 06:27:17 crc kubenswrapper[4832]: I1204 06:27:17.468069 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 04 06:27:17 crc kubenswrapper[4832]: W1204 06:27:17.474716 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda470eda9_a394_4ecb_a723_404f00bbd45a.slice/crio-e6d5a88e695ba122f50244960b4a71dad13bcf25f332971f6ffe4a40856f440a WatchSource:0}: Error finding container e6d5a88e695ba122f50244960b4a71dad13bcf25f332971f6ffe4a40856f440a: Status 404 returned error can't find the container with id e6d5a88e695ba122f50244960b4a71dad13bcf25f332971f6ffe4a40856f440a Dec 04 06:27:18 crc kubenswrapper[4832]: I1204 06:27:18.323498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a470eda9-a394-4ecb-a723-404f00bbd45a","Type":"ContainerStarted","Data":"e6d5a88e695ba122f50244960b4a71dad13bcf25f332971f6ffe4a40856f440a"} Dec 04 06:27:18 crc kubenswrapper[4832]: I1204 06:27:18.325285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3","Type":"ContainerStarted","Data":"50c4b8bc59996799ec0754cae7a6f82efda61b06d40f90fa21ecb25db8188717"} Dec 04 06:27:18 crc kubenswrapper[4832]: I1204 06:27:18.722771 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8063bac-d76a-41ad-8438-0a3bdb0f727d" path="/var/lib/kubelet/pods/a8063bac-d76a-41ad-8438-0a3bdb0f727d/volumes" Dec 04 06:27:18 crc kubenswrapper[4832]: I1204 06:27:18.723256 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7db8e8-4255-4c40-8777-1961aeadb752" path="/var/lib/kubelet/pods/dc7db8e8-4255-4c40-8777-1961aeadb752/volumes" Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.352271 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kcxl8" event={"ID":"6de6fb2f-c87b-41af-8e93-05d7da0fad2a","Type":"ContainerStarted","Data":"aeb53b6f929e7f0ef1210e4c89e3ddd14e65c7f5a3af8eb905ca24471bb763c6"} Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.353552 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-kcxl8" Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.357925 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m96v7" event={"ID":"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf","Type":"ContainerStarted","Data":"be547ca931108155f44fc16b3adf2398b24344405ef136a4aabc0b8f5cc1f369"} Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.360912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a470eda9-a394-4ecb-a723-404f00bbd45a","Type":"ContainerStarted","Data":"1a05524519273c1ed773d3ce5f5668b83d44374461a9c2938672fff6f28cb077"} Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.363861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8f89488d-b176-4bf6-9172-ed2fc6492019","Type":"ContainerStarted","Data":"3d010a031058405924799770f181d18a2b279fbff7582ef78d2d0cccfea9e4f9"} Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.369762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60daac54-910d-4a74-8a05-ab520ea21cab","Type":"ContainerStarted","Data":"1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284"} Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.370115 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.386239 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kcxl8" podStartSLOduration=22.465151112 podStartE2EDuration="26.386209213s" podCreationTimestamp="2025-12-04 06:26:55 +0000 UTC" firstStartedPulling="2025-12-04 06:27:16.83720802 +0000 UTC m=+1092.450025716" lastFinishedPulling="2025-12-04 06:27:20.758266111 +0000 UTC m=+1096.371083817" observedRunningTime="2025-12-04 06:27:21.381138475 +0000 UTC m=+1096.993956181" watchObservedRunningTime="2025-12-04 06:27:21.386209213 +0000 UTC m=+1096.999026919" Dec 04 06:27:21 crc kubenswrapper[4832]: I1204 06:27:21.400520 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.756653703 podStartE2EDuration="30.400497473s" podCreationTimestamp="2025-12-04 06:26:51 +0000 UTC" firstStartedPulling="2025-12-04 06:26:54.113438757 +0000 UTC m=+1069.726256453" lastFinishedPulling="2025-12-04 06:27:20.757282517 +0000 UTC m=+1096.370100223" observedRunningTime="2025-12-04 06:27:21.39920373 +0000 UTC m=+1097.012021426" watchObservedRunningTime="2025-12-04 06:27:21.400497473 +0000 UTC m=+1097.013315179" Dec 04 06:27:22 crc kubenswrapper[4832]: I1204 06:27:22.379004 4832 generic.go:334] "Generic (PLEG): container finished" podID="f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf" containerID="be547ca931108155f44fc16b3adf2398b24344405ef136a4aabc0b8f5cc1f369" exitCode=0 Dec 04 06:27:22 crc kubenswrapper[4832]: I1204 06:27:22.379070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m96v7" event={"ID":"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf","Type":"ContainerDied","Data":"be547ca931108155f44fc16b3adf2398b24344405ef136a4aabc0b8f5cc1f369"} Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.404019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"a470eda9-a394-4ecb-a723-404f00bbd45a","Type":"ContainerStarted","Data":"d6aa4552cc62b84f703b916e3f27a8c635ac276431b7dddb3603630a8b565217"} Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.409555 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8f89488d-b176-4bf6-9172-ed2fc6492019","Type":"ContainerStarted","Data":"9d9b77fb9a60ed2b32f0e1e820cd1434c01f4fa5532df581008360d419a2e0d5"} Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.413088 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m96v7" event={"ID":"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf","Type":"ContainerStarted","Data":"bd5348daf54bb5b6ddfab0dda728a03b232667066c127b614e288a9923222158"} Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.413136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m96v7" event={"ID":"f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf","Type":"ContainerStarted","Data":"17f3b8073ee00a0f6af7a5f100b1d416670e28f5f41b3b42d75d19b68fa2efa2"} Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.413261 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.431684 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.059824606 podStartE2EDuration="27.43166389s" podCreationTimestamp="2025-12-04 06:26:57 +0000 UTC" firstStartedPulling="2025-12-04 06:27:17.478152699 +0000 UTC m=+1093.090970405" lastFinishedPulling="2025-12-04 06:27:23.849991983 +0000 UTC m=+1099.462809689" observedRunningTime="2025-12-04 06:27:24.428513391 +0000 UTC m=+1100.041331097" watchObservedRunningTime="2025-12-04 06:27:24.43166389 +0000 UTC m=+1100.044481596" Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.447096 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.477073842 podStartE2EDuration="29.447077548s" podCreationTimestamp="2025-12-04 06:26:55 +0000 UTC" firstStartedPulling="2025-12-04 06:27:16.895725643 +0000 UTC m=+1092.508543349" lastFinishedPulling="2025-12-04 06:27:23.865729359 +0000 UTC m=+1099.478547055" observedRunningTime="2025-12-04 06:27:24.443461687 +0000 UTC m=+1100.056279423" watchObservedRunningTime="2025-12-04 06:27:24.447077548 +0000 UTC m=+1100.059895264" Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.470917 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-m96v7" podStartSLOduration=25.680858283 podStartE2EDuration="29.470889087s" podCreationTimestamp="2025-12-04 06:26:55 +0000 UTC" firstStartedPulling="2025-12-04 06:27:16.918567788 +0000 UTC m=+1092.531385494" lastFinishedPulling="2025-12-04 06:27:20.708598602 +0000 UTC m=+1096.321416298" observedRunningTime="2025-12-04 06:27:24.467287586 +0000 UTC m=+1100.080105302" watchObservedRunningTime="2025-12-04 06:27:24.470889087 +0000 UTC m=+1100.083706793" Dec 04 06:27:24 crc kubenswrapper[4832]: I1204 06:27:24.488235 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 04 06:27:25 crc kubenswrapper[4832]: I1204 06:27:25.252113 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 04 06:27:25 crc kubenswrapper[4832]: I1204 06:27:25.422197 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1d41c5c2-5373-423b-b14f-00c902111ee3","Type":"ContainerStarted","Data":"acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663"} Dec 04 06:27:25 crc kubenswrapper[4832]: I1204 06:27:25.423481 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m96v7" Dec 04 06:27:26 crc kubenswrapper[4832]: I1204 06:27:26.436579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"22fcd5ed-0004-4329-b8c6-7855939765dc","Type":"ContainerStarted","Data":"43564ec8a08d4e71f6deb7713717611a904bc5462dc403e3ce09fe66b16ec5ad"} Dec 04 06:27:26 crc kubenswrapper[4832]: I1204 06:27:26.488009 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 04 06:27:26 crc kubenswrapper[4832]: I1204 06:27:26.510288 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 04 06:27:26 crc kubenswrapper[4832]: I1204 06:27:26.510332 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 04 06:27:26 crc kubenswrapper[4832]: I1204 06:27:26.528499 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 04 06:27:26 crc kubenswrapper[4832]: I1204 06:27:26.551650 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.483507 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.488251 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.754473 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tsbps"] Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.800755 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8kfws"] Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.802791 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.810433 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.910428 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8kfws"] Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.942405 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcn6q"] Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.951304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.951423 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-config\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.951457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz6fp\" (UniqueName: \"kubernetes.io/projected/197efe52-c4f9-4868-91c6-eabfa853cc44-kube-api-access-mz6fp\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.951489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.951557 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.957341 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.964820 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.965166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.965204 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 04 06:27:27 crc kubenswrapper[4832]: I1204 06:27:27.975105 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8p5xx" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.008452 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.021456 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xv2xq"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.022651 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.027456 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cz855"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.028997 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.032669 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.032982 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.054605 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-config\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.054939 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz6fp\" (UniqueName: \"kubernetes.io/projected/197efe52-c4f9-4868-91c6-eabfa853cc44-kube-api-access-mz6fp\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.054974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.055011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.056208 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.056741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-config\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.058448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.081796 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz6fp\" (UniqueName: \"kubernetes.io/projected/197efe52-c4f9-4868-91c6-eabfa853cc44-kube-api-access-mz6fp\") pod \"dnsmasq-dns-7fd796d7df-8kfws\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") " pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.090565 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cz855"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.103860 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xv2xq"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc55e7c7-bd80-4440-922d-d0711af4b912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txmrx\" (UniqueName: \"kubernetes.io/projected/9b61be98-5007-43c6-b717-dae011be5830-kube-api-access-txmrx\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/bc55e7c7-bd80-4440-922d-d0711af4b912-ovs-rundir\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156864 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156894 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b61be98-5007-43c6-b717-dae011be5830-config\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b61be98-5007-43c6-b717-dae011be5830-scripts\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-config\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.156985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc55e7c7-bd80-4440-922d-d0711af4b912-ovn-rundir\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b61be98-5007-43c6-b717-dae011be5830-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlbh\" (UniqueName: \"kubernetes.io/projected/d6903f61-c715-4356-9d86-b03a27561821-kube-api-access-5mlbh\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc55e7c7-bd80-4440-922d-d0711af4b912-combined-ca-bundle\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157120 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrz7\" (UniqueName: \"kubernetes.io/projected/bc55e7c7-bd80-4440-922d-d0711af4b912-kube-api-access-rmrz7\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.157165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc55e7c7-bd80-4440-922d-d0711af4b912-config\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.177982 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259786 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc55e7c7-bd80-4440-922d-d0711af4b912-combined-ca-bundle\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrz7\" (UniqueName: \"kubernetes.io/projected/bc55e7c7-bd80-4440-922d-d0711af4b912-kube-api-access-rmrz7\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc55e7c7-bd80-4440-922d-d0711af4b912-config\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc55e7c7-bd80-4440-922d-d0711af4b912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.259986 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txmrx\" (UniqueName: \"kubernetes.io/projected/9b61be98-5007-43c6-b717-dae011be5830-kube-api-access-txmrx\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 
06:27:28.260096 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc55e7c7-bd80-4440-922d-d0711af4b912-ovs-rundir\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b61be98-5007-43c6-b717-dae011be5830-config\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260200 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b61be98-5007-43c6-b717-dae011be5830-scripts\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-config\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260269 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc55e7c7-bd80-4440-922d-d0711af4b912-ovn-rundir\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260291 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b61be98-5007-43c6-b717-dae011be5830-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260316 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.260338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mlbh\" (UniqueName: \"kubernetes.io/projected/d6903f61-c715-4356-9d86-b03a27561821-kube-api-access-5mlbh\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.262252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc55e7c7-bd80-4440-922d-d0711af4b912-config\") pod 
\"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.263924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b61be98-5007-43c6-b717-dae011be5830-config\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.264737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc55e7c7-bd80-4440-922d-d0711af4b912-ovn-rundir\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.265217 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc55e7c7-bd80-4440-922d-d0711af4b912-ovs-rundir\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.266251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.267007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-config\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.267259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.267302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.267402 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b61be98-5007-43c6-b717-dae011be5830-scripts\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.267668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b61be98-5007-43c6-b717-dae011be5830-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.267835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.268252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc55e7c7-bd80-4440-922d-d0711af4b912-combined-ca-bundle\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.282782 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.284790 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b61be98-5007-43c6-b717-dae011be5830-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.293148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc55e7c7-bd80-4440-922d-d0711af4b912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.294518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txmrx\" (UniqueName: \"kubernetes.io/projected/9b61be98-5007-43c6-b717-dae011be5830-kube-api-access-txmrx\") pod \"ovn-northd-0\" (UID: \"9b61be98-5007-43c6-b717-dae011be5830\") " pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.301166 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.304853 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mlbh\" (UniqueName: \"kubernetes.io/projected/d6903f61-c715-4356-9d86-b03a27561821-kube-api-access-5mlbh\") pod \"dnsmasq-dns-86db49b7ff-cz855\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.324202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrz7\" (UniqueName: \"kubernetes.io/projected/bc55e7c7-bd80-4440-922d-d0711af4b912-kube-api-access-rmrz7\") pod \"ovn-controller-metrics-xv2xq\" (UID: \"bc55e7c7-bd80-4440-922d-d0711af4b912\") " pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.352934 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xv2xq" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.365269 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.365796 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.384174 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.468190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" event={"ID":"ef95dc8a-7c26-4053-86e7-8e5436a60482","Type":"ContainerDied","Data":"432ca473398559dedca75b166de0c18d050124a08786b6f53b63deec267fa637"} Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.468340 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tsbps" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.474323 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.474558 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcn6q" event={"ID":"ab377657-262b-4d67-8277-66bbc01db3dd","Type":"ContainerDied","Data":"bd21c35ea4b7644e336a399adcc83de1de0584dba4f0b25d2cb75d4965a742c6"} Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.566048 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-config\") pod \"ef95dc8a-7c26-4053-86e7-8e5436a60482\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.566090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbqt7\" (UniqueName: \"kubernetes.io/projected/ef95dc8a-7c26-4053-86e7-8e5436a60482-kube-api-access-rbqt7\") pod \"ef95dc8a-7c26-4053-86e7-8e5436a60482\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.566194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-config\") pod \"ab377657-262b-4d67-8277-66bbc01db3dd\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.566279 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bx5t\" (UniqueName: \"kubernetes.io/projected/ab377657-262b-4d67-8277-66bbc01db3dd-kube-api-access-4bx5t\") pod \"ab377657-262b-4d67-8277-66bbc01db3dd\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.566317 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-dns-svc\") pod \"ef95dc8a-7c26-4053-86e7-8e5436a60482\" (UID: \"ef95dc8a-7c26-4053-86e7-8e5436a60482\") " Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.566341 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-dns-svc\") pod \"ab377657-262b-4d67-8277-66bbc01db3dd\" (UID: \"ab377657-262b-4d67-8277-66bbc01db3dd\") " Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.567326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab377657-262b-4d67-8277-66bbc01db3dd" (UID: "ab377657-262b-4d67-8277-66bbc01db3dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.567632 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-config" (OuterVolumeSpecName: "config") pod "ab377657-262b-4d67-8277-66bbc01db3dd" (UID: "ab377657-262b-4d67-8277-66bbc01db3dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.567672 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-config" (OuterVolumeSpecName: "config") pod "ef95dc8a-7c26-4053-86e7-8e5436a60482" (UID: "ef95dc8a-7c26-4053-86e7-8e5436a60482"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.567948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef95dc8a-7c26-4053-86e7-8e5436a60482" (UID: "ef95dc8a-7c26-4053-86e7-8e5436a60482"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.570336 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef95dc8a-7c26-4053-86e7-8e5436a60482-kube-api-access-rbqt7" (OuterVolumeSpecName: "kube-api-access-rbqt7") pod "ef95dc8a-7c26-4053-86e7-8e5436a60482" (UID: "ef95dc8a-7c26-4053-86e7-8e5436a60482"). InnerVolumeSpecName "kube-api-access-rbqt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.571745 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab377657-262b-4d67-8277-66bbc01db3dd-kube-api-access-4bx5t" (OuterVolumeSpecName: "kube-api-access-4bx5t") pod "ab377657-262b-4d67-8277-66bbc01db3dd" (UID: "ab377657-262b-4d67-8277-66bbc01db3dd"). InnerVolumeSpecName "kube-api-access-4bx5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.667034 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8kfws"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.668489 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.668525 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bx5t\" (UniqueName: \"kubernetes.io/projected/ab377657-262b-4d67-8277-66bbc01db3dd-kube-api-access-4bx5t\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.668537 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.668642 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab377657-262b-4d67-8277-66bbc01db3dd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.669462 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef95dc8a-7c26-4053-86e7-8e5436a60482-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.669504 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbqt7\" (UniqueName: \"kubernetes.io/projected/ef95dc8a-7c26-4053-86e7-8e5436a60482-kube-api-access-rbqt7\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:28 crc kubenswrapper[4832]: W1204 06:27:28.677668 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod197efe52_c4f9_4868_91c6_eabfa853cc44.slice/crio-0e3a584579964bed661936c6fc4ff7a4b86c93606f962be46bfd06f30e9f25cd WatchSource:0}: Error finding container 0e3a584579964bed661936c6fc4ff7a4b86c93606f962be46bfd06f30e9f25cd: Status 404 returned error can't find the container with id 0e3a584579964bed661936c6fc4ff7a4b86c93606f962be46bfd06f30e9f25cd Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.822817 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.873759 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcn6q"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.885255 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcn6q"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.911291 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tsbps"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.921339 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tsbps"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.935722 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xv2xq"] Dec 04 06:27:28 crc kubenswrapper[4832]: I1204 06:27:28.951222 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cz855"] Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.509661 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9841a1c2-83f5-475b-8180-b1e9cd13467b","Type":"ContainerStarted","Data":"b32b4ba3a85c18e070d3ac8869eb64fa26b94aab7235dfab58ddda32814c102d"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.534647 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9b61be98-5007-43c6-b717-dae011be5830","Type":"ContainerStarted","Data":"c7baaebc83619453b49e83ee0aa78e71ecb8cd838bedbb2e06104796a2430cba"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.543783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" event={"ID":"d6903f61-c715-4356-9d86-b03a27561821","Type":"ContainerStarted","Data":"b1a972581887f046f6c8422d7b8f619cd1dfd2960b8f5568553307fb699e5f2d"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.574037 4832 generic.go:334] "Generic (PLEG): container finished" podID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerID="1411d3e7ce6ddef8df6e80a4f4dcf6425a4ec7ce0cf49f652fa41e8480d19268" exitCode=0 Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.574105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" event={"ID":"197efe52-c4f9-4868-91c6-eabfa853cc44","Type":"ContainerDied","Data":"1411d3e7ce6ddef8df6e80a4f4dcf6425a4ec7ce0cf49f652fa41e8480d19268"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.574132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" event={"ID":"197efe52-c4f9-4868-91c6-eabfa853cc44","Type":"ContainerStarted","Data":"0e3a584579964bed661936c6fc4ff7a4b86c93606f962be46bfd06f30e9f25cd"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.609610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xv2xq" event={"ID":"bc55e7c7-bd80-4440-922d-d0711af4b912","Type":"ContainerStarted","Data":"73b0a9153440df7372ad392c91a8f051aba2b923e25a751689f66a16c7f9095b"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.609649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xv2xq" event={"ID":"bc55e7c7-bd80-4440-922d-d0711af4b912","Type":"ContainerStarted","Data":"55d261d13af531948927103c338ca127f7a54b54c0d4bd21b536c00632760910"} Dec 04 06:27:29 crc kubenswrapper[4832]: I1204 06:27:29.636837 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xv2xq" podStartSLOduration=2.63681316 podStartE2EDuration="2.63681316s" podCreationTimestamp="2025-12-04 06:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:27:29.635856037 +0000 UTC m=+1105.248673763" watchObservedRunningTime="2025-12-04 06:27:29.63681316 +0000 UTC m=+1105.249630876" Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.621791 4832 generic.go:334] "Generic (PLEG): container finished" podID="d6903f61-c715-4356-9d86-b03a27561821" containerID="5c52ade08a90363b8c24161a517fab17e92eda947e252425001c0651ab0ecbc0" exitCode=0 Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.622099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" event={"ID":"d6903f61-c715-4356-9d86-b03a27561821","Type":"ContainerDied","Data":"5c52ade08a90363b8c24161a517fab17e92eda947e252425001c0651ab0ecbc0"} Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.627102 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" event={"ID":"197efe52-c4f9-4868-91c6-eabfa853cc44","Type":"ContainerStarted","Data":"97786754e73cceac12b9c1ea271d6b1d28010c63d0217749f14166828b651d65"} Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.627249 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.757183 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab377657-262b-4d67-8277-66bbc01db3dd" path="/var/lib/kubelet/pods/ab377657-262b-4d67-8277-66bbc01db3dd/volumes" Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.757660 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef95dc8a-7c26-4053-86e7-8e5436a60482" path="/var/lib/kubelet/pods/ef95dc8a-7c26-4053-86e7-8e5436a60482/volumes" Dec 04 06:27:30 crc kubenswrapper[4832]: I1204 06:27:30.777724 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" podStartSLOduration=3.34633609 podStartE2EDuration="3.777704269s" podCreationTimestamp="2025-12-04 06:27:27 +0000 UTC" firstStartedPulling="2025-12-04 06:27:28.679689817 +0000 UTC m=+1104.292507523" lastFinishedPulling="2025-12-04 06:27:29.111057986 +0000 UTC m=+1104.723875702" observedRunningTime="2025-12-04 06:27:30.77294942 +0000 UTC m=+1106.385767146" watchObservedRunningTime="2025-12-04 06:27:30.777704269 +0000 UTC m=+1106.390521975" Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.636099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9b61be98-5007-43c6-b717-dae011be5830","Type":"ContainerStarted","Data":"8f838d36b59f8cf5b90c12996d88434de28ee284819f5a65762c5ebcd790eb85"} Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.636474 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.636486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9b61be98-5007-43c6-b717-dae011be5830","Type":"ContainerStarted","Data":"94ded8e0f5fce832cf88df4abbaf309d5d987b4c080c422284a680fdba706216"} Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.638221 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" event={"ID":"d6903f61-c715-4356-9d86-b03a27561821","Type":"ContainerStarted","Data":"b5f6c649c07bcf709ffb1a982ce4a7997e5ce2c13740d5fe8ae050fd40ec41cc"} Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.638343 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.639500 4832 generic.go:334] "Generic (PLEG): container finished" podID="22fcd5ed-0004-4329-b8c6-7855939765dc" containerID="43564ec8a08d4e71f6deb7713717611a904bc5462dc403e3ce09fe66b16ec5ad" exitCode=0 Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.639561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"22fcd5ed-0004-4329-b8c6-7855939765dc","Type":"ContainerDied","Data":"43564ec8a08d4e71f6deb7713717611a904bc5462dc403e3ce09fe66b16ec5ad"} Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.667496 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" 
podStartSLOduration=2.866505066 podStartE2EDuration="4.667457705s" podCreationTimestamp="2025-12-04 06:27:27 +0000 UTC" firstStartedPulling="2025-12-04 06:27:28.818109575 +0000 UTC m=+1104.430927281" lastFinishedPulling="2025-12-04 06:27:30.619062224 +0000 UTC m=+1106.231879920" observedRunningTime="2025-12-04 06:27:31.662124191 +0000 UTC m=+1107.274941907" watchObservedRunningTime="2025-12-04 06:27:31.667457705 +0000 UTC m=+1107.280275411" Dec 04 06:27:31 crc kubenswrapper[4832]: I1204 06:27:31.684855 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" podStartSLOduration=4.225197882 podStartE2EDuration="4.68483478s" podCreationTimestamp="2025-12-04 06:27:27 +0000 UTC" firstStartedPulling="2025-12-04 06:27:28.952883992 +0000 UTC m=+1104.565701698" lastFinishedPulling="2025-12-04 06:27:29.41252089 +0000 UTC m=+1105.025338596" observedRunningTime="2025-12-04 06:27:31.678892451 +0000 UTC m=+1107.291710157" watchObservedRunningTime="2025-12-04 06:27:31.68483478 +0000 UTC m=+1107.297652486" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.000225 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.024321 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8kfws"] Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.059076 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqc8n"] Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.064823 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.075723 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqc8n"] Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.143936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5hhl\" (UniqueName: \"kubernetes.io/projected/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-kube-api-access-t5hhl\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.144004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.144029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-dns-svc\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.144076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-config\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc 
kubenswrapper[4832]: I1204 06:27:32.144099 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.245525 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5hhl\" (UniqueName: \"kubernetes.io/projected/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-kube-api-access-t5hhl\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.245944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.245975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-dns-svc\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.246047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-config\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.246071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.246925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.247267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-dns-svc\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.247506 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.247511 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-config\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.264305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5hhl\" (UniqueName: \"kubernetes.io/projected/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-kube-api-access-t5hhl\") pod \"dnsmasq-dns-698758b865-jqc8n\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.388760 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.651850 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"22fcd5ed-0004-4329-b8c6-7855939765dc","Type":"ContainerStarted","Data":"b964229efd51252a06229600495a5b60500645fda03012a0316715e91396ab25"} Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.658132 4832 generic.go:334] "Generic (PLEG): container finished" podID="9841a1c2-83f5-475b-8180-b1e9cd13467b" containerID="b32b4ba3a85c18e070d3ac8869eb64fa26b94aab7235dfab58ddda32814c102d" exitCode=0 Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.658886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9841a1c2-83f5-475b-8180-b1e9cd13467b","Type":"ContainerDied","Data":"b32b4ba3a85c18e070d3ac8869eb64fa26b94aab7235dfab58ddda32814c102d"} Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.659503 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns" containerID="cri-o://97786754e73cceac12b9c1ea271d6b1d28010c63d0217749f14166828b651d65" gracePeriod=10 Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.692997 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.699023211 podStartE2EDuration="45.692978722s" podCreationTimestamp="2025-12-04 06:26:47 +0000 UTC" firstStartedPulling="2025-12-04 06:26:50.218676077 +0000 UTC m=+1065.831493783" lastFinishedPulling="2025-12-04 06:27:26.212631588 +0000 UTC m=+1101.825449294" observedRunningTime="2025-12-04 06:27:32.688992982 +0000 UTC m=+1108.301810688" watchObservedRunningTime="2025-12-04 06:27:32.692978722 +0000 UTC m=+1108.305796428" Dec 04 06:27:32 crc kubenswrapper[4832]: I1204 06:27:32.858709 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqc8n"] Dec 04 06:27:32 crc kubenswrapper[4832]: W1204 06:27:32.866524 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cfd2b4e_14fc_406e_87e7_b7bcee62ea08.slice/crio-b909a16564ac6ba9e8ae273bacca9763dfdc108e4152c971ff982fb1ac98a5ff WatchSource:0}: Error finding container b909a16564ac6ba9e8ae273bacca9763dfdc108e4152c971ff982fb1ac98a5ff: Status 404 returned error can't find the container with id b909a16564ac6ba9e8ae273bacca9763dfdc108e4152c971ff982fb1ac98a5ff Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.169131 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 04 
06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.175030 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.177187 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.177468 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r5m54" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.177800 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.178827 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.243508 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.373760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.373823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5889bafa-1999-43e3-846b-234db0db6e83-cache\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.373871 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.373913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6jc\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-kube-api-access-ds6jc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.373978 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5889bafa-1999-43e3-846b-234db0db6e83-lock\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475077 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475124 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5889bafa-1999-43e3-846b-234db0db6e83-cache\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc 
kubenswrapper[4832]: I1204 06:27:33.475152 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6jc\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-kube-api-access-ds6jc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5889bafa-1999-43e3-846b-234db0db6e83-lock\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: E1204 06:27:33.475342 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 06:27:33 crc kubenswrapper[4832]: E1204 06:27:33.475359 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 06:27:33 crc kubenswrapper[4832]: E1204 06:27:33.475427 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift podName:5889bafa-1999-43e3-846b-234db0db6e83 nodeName:}" failed. No retries permitted until 2025-12-04 06:27:33.975406819 +0000 UTC m=+1109.588224525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift") pod "swift-storage-0" (UID: "5889bafa-1999-43e3-846b-234db0db6e83") : configmap "swift-ring-files" not found Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475736 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5889bafa-1999-43e3-846b-234db0db6e83-lock\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5889bafa-1999-43e3-846b-234db0db6e83-cache\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.475842 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.493559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6jc\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-kube-api-access-ds6jc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.496523 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.669425 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqc8n" event={"ID":"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08","Type":"ContainerStarted","Data":"b909a16564ac6ba9e8ae273bacca9763dfdc108e4152c971ff982fb1ac98a5ff"} Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.671035 4832 generic.go:334] "Generic (PLEG): container finished" podID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerID="97786754e73cceac12b9c1ea271d6b1d28010c63d0217749f14166828b651d65" exitCode=0 Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.671075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" event={"ID":"197efe52-c4f9-4868-91c6-eabfa853cc44","Type":"ContainerDied","Data":"97786754e73cceac12b9c1ea271d6b1d28010c63d0217749f14166828b651d65"} Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.728644 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vnbbf"] Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.729729 4832 util.go:30] "No sandbox for pod can be found. 
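Every MountVolume.SetUp failure for swift-storage-0 in this stretch of the log has the same root cause: the projected etc-swift volume sources the "swift-ring-files" ConfigMap, which does not exist yet (the swift-ring-rebalance-vnbbf job that publishes it is only now being scheduled). A minimal client-go sketch of the same existence check follows; it assumes a kubeconfig at the default location and is purely illustrative, not kubelet code:

// configmap_check.go - a minimal sketch (illustrative, not kubelet code) of
// the lookup that keeps failing above: does the "swift-ring-files" ConfigMap
// exist in the "openstack" namespace yet?
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default path (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = client.CoreV1().ConfigMaps("openstack").Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// This is the state the kubelet keeps reporting above.
		fmt.Println(`configmap "swift-ring-files" not found`)
	} else if err != nil {
		panic(err)
	} else {
		fmt.Println("swift-ring-files exists; the etc-swift projected volume can mount")
	}
}

Once the rebalance job creates the ConfigMap, the kubelet's next retry of the projected volume succeeds on its own; no manual intervention is implied by the log.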
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.731577 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.731880 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.732425 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.743922 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vnbbf"]
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.779903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-combined-ca-bundle\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.779987 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2aaa5481-3d69-438a-80be-5511ecc55ddf-etc-swift\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.780034 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-dispersionconf\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.780065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-scripts\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.780084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-swiftconf\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.780123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4jw\" (UniqueName: \"kubernetes.io/projected/2aaa5481-3d69-438a-80be-5511ecc55ddf-kube-api-access-rn4jw\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.780163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-ring-data-devices\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-combined-ca-bundle\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881263 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2aaa5481-3d69-438a-80be-5511ecc55ddf-etc-swift\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-dispersionconf\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881344 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-scripts\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-swiftconf\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4jw\" (UniqueName: \"kubernetes.io/projected/2aaa5481-3d69-438a-80be-5511ecc55ddf-kube-api-access-rn4jw\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-ring-data-devices\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.881901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2aaa5481-3d69-438a-80be-5511ecc55ddf-etc-swift\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.882332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-ring-data-devices\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.882338 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-scripts\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.884805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-dispersionconf\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.885138 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-swiftconf\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.892818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-combined-ca-bundle\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.899494 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4jw\" (UniqueName: \"kubernetes.io/projected/2aaa5481-3d69-438a-80be-5511ecc55ddf-kube-api-access-rn4jw\") pod \"swift-ring-rebalance-vnbbf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") " pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:33 crc kubenswrapper[4832]: I1204 06:27:33.982682 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0"
Dec 04 06:27:33 crc kubenswrapper[4832]: E1204 06:27:33.982885 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 04 06:27:33 crc kubenswrapper[4832]: E1204 06:27:33.983124 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 04 06:27:33 crc kubenswrapper[4832]: E1204 06:27:33.983203 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift podName:5889bafa-1999-43e3-846b-234db0db6e83 nodeName:}" failed. No retries permitted until 2025-12-04 06:27:34.983179803 +0000 UTC m=+1110.595997509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift") pod "swift-storage-0" (UID: "5889bafa-1999-43e3-846b-234db0db6e83") : configmap "swift-ring-files" not found
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.048969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.509755 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vnbbf"]
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.686013 4832 generic.go:334] "Generic (PLEG): container finished" podID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerID="30d4b491d2bef5aa9d4e743bedad058c3203e3e0186bbb68b930845e886e4571" exitCode=0
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.686117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqc8n" event={"ID":"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08","Type":"ContainerDied","Data":"30d4b491d2bef5aa9d4e743bedad058c3203e3e0186bbb68b930845e886e4571"}
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.687624 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9841a1c2-83f5-475b-8180-b1e9cd13467b","Type":"ContainerStarted","Data":"25eed0040bcac1bb4f13974f895448298c5ce487944bdc37c0add8d2d160bd07"}
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.690103 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnbbf" event={"ID":"2aaa5481-3d69-438a-80be-5511ecc55ddf","Type":"ContainerStarted","Data":"97c337a64675bced6a9e657ec77a5600d832ff83f3cb512b695106c49728ec87"}
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.758268 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371990.09653 podStartE2EDuration="46.758245804s" podCreationTimestamp="2025-12-04 06:26:48 +0000 UTC" firstStartedPulling="2025-12-04 06:26:50.719368918 +0000 UTC m=+1066.332186624" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:27:34.751795953 +0000 UTC m=+1110.364613659" watchObservedRunningTime="2025-12-04 06:27:34.758245804 +0000 UTC m=+1110.371063500"
Dec 04 06:27:34 crc kubenswrapper[4832]: I1204 06:27:34.879136 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws"
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.000778 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz6fp\" (UniqueName: \"kubernetes.io/projected/197efe52-c4f9-4868-91c6-eabfa853cc44-kube-api-access-mz6fp\") pod \"197efe52-c4f9-4868-91c6-eabfa853cc44\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") "
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.000855 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-dns-svc\") pod \"197efe52-c4f9-4868-91c6-eabfa853cc44\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") "
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.000924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-config\") pod \"197efe52-c4f9-4868-91c6-eabfa853cc44\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") "
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.000980 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-ovsdbserver-nb\") pod \"197efe52-c4f9-4868-91c6-eabfa853cc44\" (UID: \"197efe52-c4f9-4868-91c6-eabfa853cc44\") "
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.001437 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0"
Dec 04 06:27:35 crc kubenswrapper[4832]: E1204 06:27:35.001597 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 04 06:27:35 crc kubenswrapper[4832]: E1204 06:27:35.001648 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 04 06:27:35 crc kubenswrapper[4832]: E1204 06:27:35.001726 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift podName:5889bafa-1999-43e3-846b-234db0db6e83 nodeName:}" failed. No retries permitted until 2025-12-04 06:27:37.001685714 +0000 UTC m=+1112.614503430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift") pod "swift-storage-0" (UID: "5889bafa-1999-43e3-846b-234db0db6e83") : configmap "swift-ring-files" not found
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.018213 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/197efe52-c4f9-4868-91c6-eabfa853cc44-kube-api-access-mz6fp" (OuterVolumeSpecName: "kube-api-access-mz6fp") pod "197efe52-c4f9-4868-91c6-eabfa853cc44" (UID: "197efe52-c4f9-4868-91c6-eabfa853cc44"). InnerVolumeSpecName "kube-api-access-mz6fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.045647 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "197efe52-c4f9-4868-91c6-eabfa853cc44" (UID: "197efe52-c4f9-4868-91c6-eabfa853cc44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.045948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-config" (OuterVolumeSpecName: "config") pod "197efe52-c4f9-4868-91c6-eabfa853cc44" (UID: "197efe52-c4f9-4868-91c6-eabfa853cc44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.056899 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "197efe52-c4f9-4868-91c6-eabfa853cc44" (UID: "197efe52-c4f9-4868-91c6-eabfa853cc44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.103267 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.103316 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-config\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.103326 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/197efe52-c4f9-4868-91c6-eabfa853cc44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.103337 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz6fp\" (UniqueName: \"kubernetes.io/projected/197efe52-c4f9-4868-91c6-eabfa853cc44-kube-api-access-mz6fp\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.362731 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.362797 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.362851 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4"
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.364134 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.364205 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61" gracePeriod=600
containerStatusID={"Type":"cri-o","ID":"f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.364205 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61" gracePeriod=600 Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.729063 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61" exitCode=0 Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.729439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61"} Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.729472 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"348e974629646d54fa0c54ee820cfb4880e34f9731c9b205d4b99c38588e1db7"} Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.729487 4832 scope.go:117] "RemoveContainer" containerID="16a2a9ef3e62675c85662671dfe30288c81082d91cc1c7e3a8b0d7e2b9dfbee1" Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.732340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqc8n" event={"ID":"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08","Type":"ContainerStarted","Data":"20420f08dfa209d75bae24fcf9408dc638ca0c410550ae1c14cfc26ae3158132"} Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.732853 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.734830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" event={"ID":"197efe52-c4f9-4868-91c6-eabfa853cc44","Type":"ContainerDied","Data":"0e3a584579964bed661936c6fc4ff7a4b86c93606f962be46bfd06f30e9f25cd"} Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.734907 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8kfws" Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.775495 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-jqc8n" podStartSLOduration=3.775476564 podStartE2EDuration="3.775476564s" podCreationTimestamp="2025-12-04 06:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:27:35.762370005 +0000 UTC m=+1111.375187711" watchObservedRunningTime="2025-12-04 06:27:35.775476564 +0000 UTC m=+1111.388294270" Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.783347 4832 scope.go:117] "RemoveContainer" containerID="97786754e73cceac12b9c1ea271d6b1d28010c63d0217749f14166828b651d65" Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.788295 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8kfws"] Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.793476 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8kfws"] Dec 04 06:27:35 crc kubenswrapper[4832]: I1204 06:27:35.806139 4832 scope.go:117] "RemoveContainer" containerID="1411d3e7ce6ddef8df6e80a4f4dcf6425a4ec7ce0cf49f652fa41e8480d19268" Dec 04 06:27:36 crc kubenswrapper[4832]: I1204 06:27:36.724297 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" path="/var/lib/kubelet/pods/197efe52-c4f9-4868-91c6-eabfa853cc44/volumes" Dec 04 06:27:37 crc kubenswrapper[4832]: I1204 06:27:37.054800 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:37 crc kubenswrapper[4832]: E1204 06:27:37.055172 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 06:27:37 crc kubenswrapper[4832]: E1204 06:27:37.055191 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 06:27:37 crc kubenswrapper[4832]: E1204 06:27:37.055247 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift podName:5889bafa-1999-43e3-846b-234db0db6e83 nodeName:}" failed. No retries permitted until 2025-12-04 06:27:41.055225892 +0000 UTC m=+1116.668043608 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift") pod "swift-storage-0" (UID: "5889bafa-1999-43e3-846b-234db0db6e83") : configmap "swift-ring-files" not found Dec 04 06:27:38 crc kubenswrapper[4832]: I1204 06:27:38.367621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.128679 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.129039 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.213006 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.764553 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.764621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.841022 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.489285 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-r66tn"] Dec 04 06:27:40 crc kubenswrapper[4832]: E1204 06:27:40.489963 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="init" Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.489985 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="init" Dec 04 06:27:40 crc kubenswrapper[4832]: E1204 06:27:40.490002 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns" Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.490009 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns" Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.490204 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns" Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.490997 4832 util.go:30] "No sandbox for pod can be found. 
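Note the durationBeforeRetry values across the repeated etc-swift failures: 500ms, 1s, 2s, 4s so far, and 8s further below. The wait doubles after each failed mount attempt. A tiny sketch that reproduces just that schedule (the 500ms base and 2x factor are read off this log, not taken from kubelet source):

// backoff.go - reproduces the retry schedule visible in the
// "durationBeforeRetry" fields in this log: 500ms, 1s, 2s, 4s, 8s.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // base delay inferred from the first retry above
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2 // each failed MountVolume.SetUp doubles the wait
	}
}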
Dec 04 06:27:38 crc kubenswrapper[4832]: I1204 06:27:38.367621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-cz855"
Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.128679 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.129039 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.213006 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.764553 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.764621 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 04 06:27:39 crc kubenswrapper[4832]: I1204 06:27:39.841022 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.489285 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-r66tn"]
Dec 04 06:27:40 crc kubenswrapper[4832]: E1204 06:27:40.489963 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="init"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.489985 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="init"
Dec 04 06:27:40 crc kubenswrapper[4832]: E1204 06:27:40.490002 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.490009 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.490204 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="197efe52-c4f9-4868-91c6-eabfa853cc44" containerName="dnsmasq-dns"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.490997 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.501114 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r66tn"]
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.526883 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e3e1243-4bec-4c31-9bf8-1d7619986a47-operator-scripts\") pod \"placement-db-create-r66tn\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.527010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88kv\" (UniqueName: \"kubernetes.io/projected/1e3e1243-4bec-4c31-9bf8-1d7619986a47-kube-api-access-q88kv\") pod \"placement-db-create-r66tn\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.594189 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e659-account-create-update-dr4hp"]
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.595520 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.597288 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.602934 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e659-account-create-update-dr4hp"]
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.629051 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa677da2-7506-423d-9889-cfd73be70e99-operator-scripts\") pod \"placement-e659-account-create-update-dr4hp\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.629103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnbx\" (UniqueName: \"kubernetes.io/projected/aa677da2-7506-423d-9889-cfd73be70e99-kube-api-access-brnbx\") pod \"placement-e659-account-create-update-dr4hp\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.629137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e3e1243-4bec-4c31-9bf8-1d7619986a47-operator-scripts\") pod \"placement-db-create-r66tn\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.629448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q88kv\" (UniqueName: \"kubernetes.io/projected/1e3e1243-4bec-4c31-9bf8-1d7619986a47-kube-api-access-q88kv\") pod \"placement-db-create-r66tn\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.629865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e3e1243-4bec-4c31-9bf8-1d7619986a47-operator-scripts\") pod \"placement-db-create-r66tn\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.657137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q88kv\" (UniqueName: \"kubernetes.io/projected/1e3e1243-4bec-4c31-9bf8-1d7619986a47-kube-api-access-q88kv\") pod \"placement-db-create-r66tn\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.730796 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa677da2-7506-423d-9889-cfd73be70e99-operator-scripts\") pod \"placement-e659-account-create-update-dr4hp\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.730865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnbx\" (UniqueName: \"kubernetes.io/projected/aa677da2-7506-423d-9889-cfd73be70e99-kube-api-access-brnbx\") pod \"placement-e659-account-create-update-dr4hp\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.731593 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa677da2-7506-423d-9889-cfd73be70e99-operator-scripts\") pod \"placement-e659-account-create-update-dr4hp\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.750812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brnbx\" (UniqueName: \"kubernetes.io/projected/aa677da2-7506-423d-9889-cfd73be70e99-kube-api-access-brnbx\") pod \"placement-e659-account-create-update-dr4hp\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.779575 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnbbf" event={"ID":"2aaa5481-3d69-438a-80be-5511ecc55ddf","Type":"ContainerStarted","Data":"a73935f70471abb74f85296052780e4d9bed6bbfad00023cd1228801a91a4e6e"}
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.802349 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vnbbf" podStartSLOduration=1.974511254 podStartE2EDuration="7.802324929s" podCreationTimestamp="2025-12-04 06:27:33 +0000 UTC" firstStartedPulling="2025-12-04 06:27:34.518000714 +0000 UTC m=+1110.130818430" lastFinishedPulling="2025-12-04 06:27:40.345814389 +0000 UTC m=+1115.958632105" observedRunningTime="2025-12-04 06:27:40.796951974 +0000 UTC m=+1116.409769680" watchObservedRunningTime="2025-12-04 06:27:40.802324929 +0000 UTC m=+1116.415142635"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.816601 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r66tn"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.829253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.904793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 04 06:27:40 crc kubenswrapper[4832]: I1204 06:27:40.921792 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e659-account-create-update-dr4hp"
Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.138579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0"
Dec 04 06:27:41 crc kubenswrapper[4832]: E1204 06:27:41.139197 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 04 06:27:41 crc kubenswrapper[4832]: E1204 06:27:41.139217 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 04 06:27:41 crc kubenswrapper[4832]: E1204 06:27:41.139258 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift podName:5889bafa-1999-43e3-846b-234db0db6e83 nodeName:}" failed. No retries permitted until 2025-12-04 06:27:49.139243751 +0000 UTC m=+1124.752061457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift") pod "swift-storage-0" (UID: "5889bafa-1999-43e3-846b-234db0db6e83") : configmap "swift-ring-files" not found
Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.309582 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r66tn"]
Dec 04 06:27:41 crc kubenswrapper[4832]: W1204 06:27:41.315957 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e3e1243_4bec_4c31_9bf8_1d7619986a47.slice/crio-b23a7e242b3e91bb8c01dcd9f253f203a577c1f1da0cb80e527b789a5d27c5d9 WatchSource:0}: Error finding container b23a7e242b3e91bb8c01dcd9f253f203a577c1f1da0cb80e527b789a5d27c5d9: Status 404 returned error can't find the container with id b23a7e242b3e91bb8c01dcd9f253f203a577c1f1da0cb80e527b789a5d27c5d9
Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.432048 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e659-account-create-update-dr4hp"]
Dec 04 06:27:41 crc kubenswrapper[4832]: W1204 06:27:41.435004 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa677da2_7506_423d_9889_cfd73be70e99.slice/crio-646333d21a8981fb0af061297b37926adf591349acad9e4d438990d3707705d4 WatchSource:0}: Error finding container 646333d21a8981fb0af061297b37926adf591349acad9e4d438990d3707705d4: Status 404 returned error can't find the container with id 646333d21a8981fb0af061297b37926adf591349acad9e4d438990d3707705d4
Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.788861 4832 generic.go:334] "Generic (PLEG): container finished"
podID="1e3e1243-4bec-4c31-9bf8-1d7619986a47" containerID="6badc3dd0b21274b852f80e49c4cb1e042c8eb10c3d6806fea01c605484e2442" exitCode=0 Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.788946 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r66tn" event={"ID":"1e3e1243-4bec-4c31-9bf8-1d7619986a47","Type":"ContainerDied","Data":"6badc3dd0b21274b852f80e49c4cb1e042c8eb10c3d6806fea01c605484e2442"} Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.788969 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r66tn" event={"ID":"1e3e1243-4bec-4c31-9bf8-1d7619986a47","Type":"ContainerStarted","Data":"b23a7e242b3e91bb8c01dcd9f253f203a577c1f1da0cb80e527b789a5d27c5d9"} Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.793140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e659-account-create-update-dr4hp" event={"ID":"aa677da2-7506-423d-9889-cfd73be70e99","Type":"ContainerStarted","Data":"ea82e158668bdfc3f97696bdd40f14ba60a710aebeac8b230e72b7b930a8cbe2"} Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.793177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e659-account-create-update-dr4hp" event={"ID":"aa677da2-7506-423d-9889-cfd73be70e99","Type":"ContainerStarted","Data":"646333d21a8981fb0af061297b37926adf591349acad9e4d438990d3707705d4"} Dec 04 06:27:41 crc kubenswrapper[4832]: I1204 06:27:41.830518 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e659-account-create-update-dr4hp" podStartSLOduration=1.830498242 podStartE2EDuration="1.830498242s" podCreationTimestamp="2025-12-04 06:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:27:41.826924723 +0000 UTC m=+1117.439742429" watchObservedRunningTime="2025-12-04 06:27:41.830498242 +0000 UTC m=+1117.443315948" Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.390547 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.450024 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cz855"] Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.450349 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" podUID="d6903f61-c715-4356-9d86-b03a27561821" containerName="dnsmasq-dns" containerID="cri-o://b5f6c649c07bcf709ffb1a982ce4a7997e5ce2c13740d5fe8ae050fd40ec41cc" gracePeriod=10 Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.824923 4832 generic.go:334] "Generic (PLEG): container finished" podID="d6903f61-c715-4356-9d86-b03a27561821" containerID="b5f6c649c07bcf709ffb1a982ce4a7997e5ce2c13740d5fe8ae050fd40ec41cc" exitCode=0 Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.825048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" event={"ID":"d6903f61-c715-4356-9d86-b03a27561821","Type":"ContainerDied","Data":"b5f6c649c07bcf709ffb1a982ce4a7997e5ce2c13740d5fe8ae050fd40ec41cc"} Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.826959 4832 generic.go:334] "Generic (PLEG): container finished" podID="aa677da2-7506-423d-9889-cfd73be70e99" containerID="ea82e158668bdfc3f97696bdd40f14ba60a710aebeac8b230e72b7b930a8cbe2" exitCode=0 Dec 04 06:27:42 crc 
kubenswrapper[4832]: I1204 06:27:42.828802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e659-account-create-update-dr4hp" event={"ID":"aa677da2-7506-423d-9889-cfd73be70e99","Type":"ContainerDied","Data":"ea82e158668bdfc3f97696bdd40f14ba60a710aebeac8b230e72b7b930a8cbe2"} Dec 04 06:27:42 crc kubenswrapper[4832]: I1204 06:27:42.908053 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.013057 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-config\") pod \"d6903f61-c715-4356-9d86-b03a27561821\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.013283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-sb\") pod \"d6903f61-c715-4356-9d86-b03a27561821\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.014051 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mlbh\" (UniqueName: \"kubernetes.io/projected/d6903f61-c715-4356-9d86-b03a27561821-kube-api-access-5mlbh\") pod \"d6903f61-c715-4356-9d86-b03a27561821\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.014084 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-nb\") pod \"d6903f61-c715-4356-9d86-b03a27561821\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.014114 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-dns-svc\") pod \"d6903f61-c715-4356-9d86-b03a27561821\" (UID: \"d6903f61-c715-4356-9d86-b03a27561821\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.052036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6903f61-c715-4356-9d86-b03a27561821-kube-api-access-5mlbh" (OuterVolumeSpecName: "kube-api-access-5mlbh") pod "d6903f61-c715-4356-9d86-b03a27561821" (UID: "d6903f61-c715-4356-9d86-b03a27561821"). InnerVolumeSpecName "kube-api-access-5mlbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.071109 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6903f61-c715-4356-9d86-b03a27561821" (UID: "d6903f61-c715-4356-9d86-b03a27561821"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.076590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6903f61-c715-4356-9d86-b03a27561821" (UID: "d6903f61-c715-4356-9d86-b03a27561821"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.085776 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-config" (OuterVolumeSpecName: "config") pod "d6903f61-c715-4356-9d86-b03a27561821" (UID: "d6903f61-c715-4356-9d86-b03a27561821"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.117427 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mlbh\" (UniqueName: \"kubernetes.io/projected/d6903f61-c715-4356-9d86-b03a27561821-kube-api-access-5mlbh\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.117455 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.117465 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.117474 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.138183 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r66tn" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.138577 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6903f61-c715-4356-9d86-b03a27561821" (UID: "d6903f61-c715-4356-9d86-b03a27561821"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.220119 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6903f61-c715-4356-9d86-b03a27561821-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.321700 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e3e1243-4bec-4c31-9bf8-1d7619986a47-operator-scripts\") pod \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.321790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q88kv\" (UniqueName: \"kubernetes.io/projected/1e3e1243-4bec-4c31-9bf8-1d7619986a47-kube-api-access-q88kv\") pod \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\" (UID: \"1e3e1243-4bec-4c31-9bf8-1d7619986a47\") " Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.322116 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3e1243-4bec-4c31-9bf8-1d7619986a47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e3e1243-4bec-4c31-9bf8-1d7619986a47" (UID: "1e3e1243-4bec-4c31-9bf8-1d7619986a47"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.325629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3e1243-4bec-4c31-9bf8-1d7619986a47-kube-api-access-q88kv" (OuterVolumeSpecName: "kube-api-access-q88kv") pod "1e3e1243-4bec-4c31-9bf8-1d7619986a47" (UID: "1e3e1243-4bec-4c31-9bf8-1d7619986a47"). InnerVolumeSpecName "kube-api-access-q88kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.364981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.424159 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e3e1243-4bec-4c31-9bf8-1d7619986a47-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.424219 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q88kv\" (UniqueName: \"kubernetes.io/projected/1e3e1243-4bec-4c31-9bf8-1d7619986a47-kube-api-access-q88kv\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.836244 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r66tn" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.836243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r66tn" event={"ID":"1e3e1243-4bec-4c31-9bf8-1d7619986a47","Type":"ContainerDied","Data":"b23a7e242b3e91bb8c01dcd9f253f203a577c1f1da0cb80e527b789a5d27c5d9"} Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.836367 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23a7e242b3e91bb8c01dcd9f253f203a577c1f1da0cb80e527b789a5d27c5d9" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.838582 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.838643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cz855" event={"ID":"d6903f61-c715-4356-9d86-b03a27561821","Type":"ContainerDied","Data":"b1a972581887f046f6c8422d7b8f619cd1dfd2960b8f5568553307fb699e5f2d"} Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.838682 4832 scope.go:117] "RemoveContainer" containerID="b5f6c649c07bcf709ffb1a982ce4a7997e5ce2c13740d5fe8ae050fd40ec41cc" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.869480 4832 scope.go:117] "RemoveContainer" containerID="5c52ade08a90363b8c24161a517fab17e92eda947e252425001c0651ab0ecbc0" Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.878276 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cz855"] Dec 04 06:27:43 crc kubenswrapper[4832]: I1204 06:27:43.888457 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cz855"] Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.153000 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e659-account-create-update-dr4hp" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.337629 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa677da2-7506-423d-9889-cfd73be70e99-operator-scripts\") pod \"aa677da2-7506-423d-9889-cfd73be70e99\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.337876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brnbx\" (UniqueName: \"kubernetes.io/projected/aa677da2-7506-423d-9889-cfd73be70e99-kube-api-access-brnbx\") pod \"aa677da2-7506-423d-9889-cfd73be70e99\" (UID: \"aa677da2-7506-423d-9889-cfd73be70e99\") " Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.338649 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa677da2-7506-423d-9889-cfd73be70e99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa677da2-7506-423d-9889-cfd73be70e99" (UID: "aa677da2-7506-423d-9889-cfd73be70e99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.343077 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa677da2-7506-423d-9889-cfd73be70e99-kube-api-access-brnbx" (OuterVolumeSpecName: "kube-api-access-brnbx") pod "aa677da2-7506-423d-9889-cfd73be70e99" (UID: "aa677da2-7506-423d-9889-cfd73be70e99"). InnerVolumeSpecName "kube-api-access-brnbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.439578 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa677da2-7506-423d-9889-cfd73be70e99-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.439626 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brnbx\" (UniqueName: \"kubernetes.io/projected/aa677da2-7506-423d-9889-cfd73be70e99-kube-api-access-brnbx\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.742724 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6903f61-c715-4356-9d86-b03a27561821" path="/var/lib/kubelet/pods/d6903f61-c715-4356-9d86-b03a27561821/volumes" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.849471 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e659-account-create-update-dr4hp" event={"ID":"aa677da2-7506-423d-9889-cfd73be70e99","Type":"ContainerDied","Data":"646333d21a8981fb0af061297b37926adf591349acad9e4d438990d3707705d4"} Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.850967 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="646333d21a8981fb0af061297b37926adf591349acad9e4d438990d3707705d4" Dec 04 06:27:44 crc kubenswrapper[4832]: I1204 06:27:44.849717 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e659-account-create-update-dr4hp" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.443363 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5w6dd"] Dec 04 06:27:45 crc kubenswrapper[4832]: E1204 06:27:45.451276 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6903f61-c715-4356-9d86-b03a27561821" containerName="init" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451307 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6903f61-c715-4356-9d86-b03a27561821" containerName="init" Dec 04 06:27:45 crc kubenswrapper[4832]: E1204 06:27:45.451315 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa677da2-7506-423d-9889-cfd73be70e99" containerName="mariadb-account-create-update" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451322 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa677da2-7506-423d-9889-cfd73be70e99" containerName="mariadb-account-create-update" Dec 04 06:27:45 crc kubenswrapper[4832]: E1204 06:27:45.451336 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6903f61-c715-4356-9d86-b03a27561821" containerName="dnsmasq-dns" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451342 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6903f61-c715-4356-9d86-b03a27561821" containerName="dnsmasq-dns" Dec 04 06:27:45 crc kubenswrapper[4832]: E1204 06:27:45.451361 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3e1243-4bec-4c31-9bf8-1d7619986a47" containerName="mariadb-database-create" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451367 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3e1243-4bec-4c31-9bf8-1d7619986a47" containerName="mariadb-database-create" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451611 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3e1243-4bec-4c31-9bf8-1d7619986a47" containerName="mariadb-database-create" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451634 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa677da2-7506-423d-9889-cfd73be70e99" containerName="mariadb-account-create-update" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.451645 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6903f61-c715-4356-9d86-b03a27561821" containerName="dnsmasq-dns" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.452103 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5w6dd"] Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.452188 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.559878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78ab66e-ee59-443b-88a1-72bec5167698-operator-scripts\") pod \"glance-db-create-5w6dd\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.559985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgws\" (UniqueName: \"kubernetes.io/projected/f78ab66e-ee59-443b-88a1-72bec5167698-kube-api-access-phgws\") pod \"glance-db-create-5w6dd\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.590418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a629-account-create-update-5cjb2"] Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.591661 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.599244 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a629-account-create-update-5cjb2"] Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.646595 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.661343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgws\" (UniqueName: \"kubernetes.io/projected/f78ab66e-ee59-443b-88a1-72bec5167698-kube-api-access-phgws\") pod \"glance-db-create-5w6dd\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.661703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78ab66e-ee59-443b-88a1-72bec5167698-operator-scripts\") pod \"glance-db-create-5w6dd\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.662768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78ab66e-ee59-443b-88a1-72bec5167698-operator-scripts\") pod \"glance-db-create-5w6dd\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.682265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgws\" (UniqueName: \"kubernetes.io/projected/f78ab66e-ee59-443b-88a1-72bec5167698-kube-api-access-phgws\") pod \"glance-db-create-5w6dd\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.763541 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c65d67-949b-41ec-a47a-d9b7959e827f-operator-scripts\") pod \"glance-a629-account-create-update-5cjb2\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 
crc kubenswrapper[4832]: I1204 06:27:45.763705 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gbw\" (UniqueName: \"kubernetes.io/projected/20c65d67-949b-41ec-a47a-d9b7959e827f-kube-api-access-m6gbw\") pod \"glance-a629-account-create-update-5cjb2\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.773033 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.865318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gbw\" (UniqueName: \"kubernetes.io/projected/20c65d67-949b-41ec-a47a-d9b7959e827f-kube-api-access-m6gbw\") pod \"glance-a629-account-create-update-5cjb2\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.866133 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c65d67-949b-41ec-a47a-d9b7959e827f-operator-scripts\") pod \"glance-a629-account-create-update-5cjb2\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.866959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c65d67-949b-41ec-a47a-d9b7959e827f-operator-scripts\") pod \"glance-a629-account-create-update-5cjb2\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.897171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gbw\" (UniqueName: \"kubernetes.io/projected/20c65d67-949b-41ec-a47a-d9b7959e827f-kube-api-access-m6gbw\") pod \"glance-a629-account-create-update-5cjb2\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:45 crc kubenswrapper[4832]: I1204 06:27:45.970664 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:46 crc kubenswrapper[4832]: I1204 06:27:46.250896 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5w6dd"] Dec 04 06:27:46 crc kubenswrapper[4832]: W1204 06:27:46.254585 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78ab66e_ee59_443b_88a1_72bec5167698.slice/crio-30e5a88d5c48223c4701f59258ffb461b632057794a840ae591ff216a7064a8f WatchSource:0}: Error finding container 30e5a88d5c48223c4701f59258ffb461b632057794a840ae591ff216a7064a8f: Status 404 returned error can't find the container with id 30e5a88d5c48223c4701f59258ffb461b632057794a840ae591ff216a7064a8f Dec 04 06:27:46 crc kubenswrapper[4832]: I1204 06:27:46.446373 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a629-account-create-update-5cjb2"] Dec 04 06:27:46 crc kubenswrapper[4832]: W1204 06:27:46.448654 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20c65d67_949b_41ec_a47a_d9b7959e827f.slice/crio-f848bc6af6eaf1140eae77bf9e8da0553758a08c9bc077f021e515d5baa405fb WatchSource:0}: Error finding container f848bc6af6eaf1140eae77bf9e8da0553758a08c9bc077f021e515d5baa405fb: Status 404 returned error can't find the container with id f848bc6af6eaf1140eae77bf9e8da0553758a08c9bc077f021e515d5baa405fb Dec 04 06:27:46 crc kubenswrapper[4832]: I1204 06:27:46.905292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5w6dd" event={"ID":"f78ab66e-ee59-443b-88a1-72bec5167698","Type":"ContainerStarted","Data":"30e5a88d5c48223c4701f59258ffb461b632057794a840ae591ff216a7064a8f"} Dec 04 06:27:46 crc kubenswrapper[4832]: I1204 06:27:46.907248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a629-account-create-update-5cjb2" event={"ID":"20c65d67-949b-41ec-a47a-d9b7959e827f","Type":"ContainerStarted","Data":"f848bc6af6eaf1140eae77bf9e8da0553758a08c9bc077f021e515d5baa405fb"} Dec 04 06:27:47 crc kubenswrapper[4832]: I1204 06:27:47.917042 4832 generic.go:334] "Generic (PLEG): container finished" podID="20c65d67-949b-41ec-a47a-d9b7959e827f" containerID="a62149d52621256525ca365a3ec543c430258f0a65c712fbce3f1d19b4bfdff8" exitCode=0 Dec 04 06:27:47 crc kubenswrapper[4832]: I1204 06:27:47.917414 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a629-account-create-update-5cjb2" event={"ID":"20c65d67-949b-41ec-a47a-d9b7959e827f","Type":"ContainerDied","Data":"a62149d52621256525ca365a3ec543c430258f0a65c712fbce3f1d19b4bfdff8"} Dec 04 06:27:47 crc kubenswrapper[4832]: I1204 06:27:47.920182 4832 generic.go:334] "Generic (PLEG): container finished" podID="f78ab66e-ee59-443b-88a1-72bec5167698" containerID="0b4874334a0366544fd6b93e48d3349541599526c0d7d7af447f93e9dd184deb" exitCode=0 Dec 04 06:27:47 crc kubenswrapper[4832]: I1204 06:27:47.920225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5w6dd" event={"ID":"f78ab66e-ee59-443b-88a1-72bec5167698","Type":"ContainerDied","Data":"0b4874334a0366544fd6b93e48d3349541599526c0d7d7af447f93e9dd184deb"} Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.155042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: 
\"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0" Dec 04 06:27:49 crc kubenswrapper[4832]: E1204 06:27:49.155840 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 06:27:49 crc kubenswrapper[4832]: E1204 06:27:49.155857 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 06:27:49 crc kubenswrapper[4832]: E1204 06:27:49.155920 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift podName:5889bafa-1999-43e3-846b-234db0db6e83 nodeName:}" failed. No retries permitted until 2025-12-04 06:28:05.155900014 +0000 UTC m=+1140.768717720 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift") pod "swift-storage-0" (UID: "5889bafa-1999-43e3-846b-234db0db6e83") : configmap "swift-ring-files" not found Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.351181 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.357287 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.460990 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c65d67-949b-41ec-a47a-d9b7959e827f-operator-scripts\") pod \"20c65d67-949b-41ec-a47a-d9b7959e827f\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.461137 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgws\" (UniqueName: \"kubernetes.io/projected/f78ab66e-ee59-443b-88a1-72bec5167698-kube-api-access-phgws\") pod \"f78ab66e-ee59-443b-88a1-72bec5167698\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.461241 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78ab66e-ee59-443b-88a1-72bec5167698-operator-scripts\") pod \"f78ab66e-ee59-443b-88a1-72bec5167698\" (UID: \"f78ab66e-ee59-443b-88a1-72bec5167698\") " Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.461371 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6gbw\" (UniqueName: \"kubernetes.io/projected/20c65d67-949b-41ec-a47a-d9b7959e827f-kube-api-access-m6gbw\") pod \"20c65d67-949b-41ec-a47a-d9b7959e827f\" (UID: \"20c65d67-949b-41ec-a47a-d9b7959e827f\") " Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.462179 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78ab66e-ee59-443b-88a1-72bec5167698-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f78ab66e-ee59-443b-88a1-72bec5167698" (UID: "f78ab66e-ee59-443b-88a1-72bec5167698"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.462554 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c65d67-949b-41ec-a47a-d9b7959e827f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20c65d67-949b-41ec-a47a-d9b7959e827f" (UID: "20c65d67-949b-41ec-a47a-d9b7959e827f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.468439 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c65d67-949b-41ec-a47a-d9b7959e827f-kube-api-access-m6gbw" (OuterVolumeSpecName: "kube-api-access-m6gbw") pod "20c65d67-949b-41ec-a47a-d9b7959e827f" (UID: "20c65d67-949b-41ec-a47a-d9b7959e827f"). InnerVolumeSpecName "kube-api-access-m6gbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.474089 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78ab66e-ee59-443b-88a1-72bec5167698-kube-api-access-phgws" (OuterVolumeSpecName: "kube-api-access-phgws") pod "f78ab66e-ee59-443b-88a1-72bec5167698" (UID: "f78ab66e-ee59-443b-88a1-72bec5167698"). InnerVolumeSpecName "kube-api-access-phgws". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.563260 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f78ab66e-ee59-443b-88a1-72bec5167698-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.563295 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6gbw\" (UniqueName: \"kubernetes.io/projected/20c65d67-949b-41ec-a47a-d9b7959e827f-kube-api-access-m6gbw\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.563306 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20c65d67-949b-41ec-a47a-d9b7959e827f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.563315 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phgws\" (UniqueName: \"kubernetes.io/projected/f78ab66e-ee59-443b-88a1-72bec5167698-kube-api-access-phgws\") on node \"crc\" DevicePath \"\"" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.723708 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9g9d7"] Dec 04 06:27:49 crc kubenswrapper[4832]: E1204 06:27:49.724296 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78ab66e-ee59-443b-88a1-72bec5167698" containerName="mariadb-database-create" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.724312 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78ab66e-ee59-443b-88a1-72bec5167698" containerName="mariadb-database-create" Dec 04 06:27:49 crc kubenswrapper[4832]: E1204 06:27:49.724349 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c65d67-949b-41ec-a47a-d9b7959e827f" containerName="mariadb-account-create-update" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.724356 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c65d67-949b-41ec-a47a-d9b7959e827f" containerName="mariadb-account-create-update" Dec 04 06:27:49 crc 
kubenswrapper[4832]: I1204 06:27:49.724556 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c65d67-949b-41ec-a47a-d9b7959e827f" containerName="mariadb-account-create-update" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.724575 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78ab66e-ee59-443b-88a1-72bec5167698" containerName="mariadb-database-create" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.725272 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.734795 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9g9d7"] Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.778003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-operator-scripts\") pod \"keystone-db-create-9g9d7\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") " pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.778084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgrc\" (UniqueName: \"kubernetes.io/projected/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-kube-api-access-thgrc\") pod \"keystone-db-create-9g9d7\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") " pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.841221 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-722d-account-create-update-hps7x"] Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.842290 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.844587 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.853895 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-722d-account-create-update-hps7x"] Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.879813 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-operator-scripts\") pod \"keystone-db-create-9g9d7\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") " pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.880205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgrc\" (UniqueName: \"kubernetes.io/projected/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-kube-api-access-thgrc\") pod \"keystone-db-create-9g9d7\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") " pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.881093 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-operator-scripts\") pod \"keystone-db-create-9g9d7\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") " pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.900712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thgrc\" (UniqueName: \"kubernetes.io/projected/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-kube-api-access-thgrc\") pod \"keystone-db-create-9g9d7\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") " pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.938463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a629-account-create-update-5cjb2" event={"ID":"20c65d67-949b-41ec-a47a-d9b7959e827f","Type":"ContainerDied","Data":"f848bc6af6eaf1140eae77bf9e8da0553758a08c9bc077f021e515d5baa405fb"} Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.938500 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a629-account-create-update-5cjb2" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.938509 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f848bc6af6eaf1140eae77bf9e8da0553758a08c9bc077f021e515d5baa405fb" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.940006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5w6dd" event={"ID":"f78ab66e-ee59-443b-88a1-72bec5167698","Type":"ContainerDied","Data":"30e5a88d5c48223c4701f59258ffb461b632057794a840ae591ff216a7064a8f"} Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.940033 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e5a88d5c48223c4701f59258ffb461b632057794a840ae591ff216a7064a8f" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.940058 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5w6dd" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.982506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2155b199-49bd-41f9-9253-dd3f6d786bdb-operator-scripts\") pod \"keystone-722d-account-create-update-hps7x\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") " pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:49 crc kubenswrapper[4832]: I1204 06:27:49.982596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcfw\" (UniqueName: \"kubernetes.io/projected/2155b199-49bd-41f9-9253-dd3f6d786bdb-kube-api-access-nlcfw\") pod \"keystone-722d-account-create-update-hps7x\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") " pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.042889 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9g9d7" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.084406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2155b199-49bd-41f9-9253-dd3f6d786bdb-operator-scripts\") pod \"keystone-722d-account-create-update-hps7x\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") " pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.084889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcfw\" (UniqueName: \"kubernetes.io/projected/2155b199-49bd-41f9-9253-dd3f6d786bdb-kube-api-access-nlcfw\") pod \"keystone-722d-account-create-update-hps7x\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") " pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.085384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2155b199-49bd-41f9-9253-dd3f6d786bdb-operator-scripts\") pod \"keystone-722d-account-create-update-hps7x\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") " pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.110713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcfw\" (UniqueName: \"kubernetes.io/projected/2155b199-49bd-41f9-9253-dd3f6d786bdb-kube-api-access-nlcfw\") pod \"keystone-722d-account-create-update-hps7x\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") " pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.162294 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-722d-account-create-update-hps7x" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.477384 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9g9d7"] Dec 04 06:27:50 crc kubenswrapper[4832]: W1204 06:27:50.522281 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2cca739_d2fb_4f7e_aec3_89d3bf4d998b.slice/crio-e76de645a982b819539f9e91e255a37301f20dcd5788a02c2fac073e7d06f035 WatchSource:0}: Error finding container e76de645a982b819539f9e91e255a37301f20dcd5788a02c2fac073e7d06f035: Status 404 returned error can't find the container with id e76de645a982b819539f9e91e255a37301f20dcd5788a02c2fac073e7d06f035 Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.659309 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-722d-account-create-update-hps7x"] Dec 04 06:27:50 crc kubenswrapper[4832]: W1204 06:27:50.674322 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2155b199_49bd_41f9_9253_dd3f6d786bdb.slice/crio-f4638546d6c6085f4091f794a18898ade64090862db975573d9aa54270a025f6 WatchSource:0}: Error finding container f4638546d6c6085f4091f794a18898ade64090862db975573d9aa54270a025f6: Status 404 returned error can't find the container with id f4638546d6c6085f4091f794a18898ade64090862db975573d9aa54270a025f6 Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.807011 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-lckgk"] Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.808226 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.812478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.813702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5mhp8" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.817468 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lckgk"] Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.903075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-config-data\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.903158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-combined-ca-bundle\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.903194 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsscn\" (UniqueName: \"kubernetes.io/projected/8f4298e5-b22d-4f71-b682-87539fc2bae7-kube-api-access-dsscn\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:50 crc kubenswrapper[4832]: 
I1204 06:27:50.903239 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-db-sync-config-data\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.950801 4832 generic.go:334] "Generic (PLEG): container finished" podID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerID="50c4b8bc59996799ec0754cae7a6f82efda61b06d40f90fa21ecb25db8188717" exitCode=0 Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.950914 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3","Type":"ContainerDied","Data":"50c4b8bc59996799ec0754cae7a6f82efda61b06d40f90fa21ecb25db8188717"} Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.952812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9g9d7" event={"ID":"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b","Type":"ContainerStarted","Data":"e76de645a982b819539f9e91e255a37301f20dcd5788a02c2fac073e7d06f035"} Dec 04 06:27:50 crc kubenswrapper[4832]: I1204 06:27:50.953865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-722d-account-create-update-hps7x" event={"ID":"2155b199-49bd-41f9-9253-dd3f6d786bdb","Type":"ContainerStarted","Data":"f4638546d6c6085f4091f794a18898ade64090862db975573d9aa54270a025f6"} Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.004655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-config-data\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.004724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-combined-ca-bundle\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.004755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsscn\" (UniqueName: \"kubernetes.io/projected/8f4298e5-b22d-4f71-b682-87539fc2bae7-kube-api-access-dsscn\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.004801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-db-sync-config-data\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.009718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-config-data\") pod \"glance-db-sync-lckgk\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " pod="openstack/glance-db-sync-lckgk" Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.015030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.129095 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lckgk"
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.728822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-lckgk"]
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.964944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3","Type":"ContainerStarted","Data":"a4fa05ed35c11a542a8d0eb9a657526f567af5481c9c946931f663f314dd361d"}
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.965261 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.966697 4832 generic.go:334] "Generic (PLEG): container finished" podID="d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" containerID="4ada567d7943587b983fbb0a1839473fff93aa668f03edc04b3ec401bdb14b19" exitCode=0
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.966768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9g9d7" event={"ID":"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b","Type":"ContainerDied","Data":"4ada567d7943587b983fbb0a1839473fff93aa668f03edc04b3ec401bdb14b19"}
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.968422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lckgk" event={"ID":"8f4298e5-b22d-4f71-b682-87539fc2bae7","Type":"ContainerStarted","Data":"e9eacaeef7b6abf53d798224a96e7f69b5221ae7da349079d898b76fde17caac"}
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.970446 4832 generic.go:334] "Generic (PLEG): container finished" podID="2155b199-49bd-41f9-9253-dd3f6d786bdb" containerID="257e15da5acf5e54dbe3cb0e345d79ce07a92c55b24a13c57254f4701723df4d" exitCode=0
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.970519 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-722d-account-create-update-hps7x" event={"ID":"2155b199-49bd-41f9-9253-dd3f6d786bdb","Type":"ContainerDied","Data":"257e15da5acf5e54dbe3cb0e345d79ce07a92c55b24a13c57254f4701723df4d"}
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.972080 4832 generic.go:334] "Generic (PLEG): container finished" podID="2aaa5481-3d69-438a-80be-5511ecc55ddf" containerID="a73935f70471abb74f85296052780e4d9bed6bbfad00023cd1228801a91a4e6e" exitCode=0
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.972130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnbbf" event={"ID":"2aaa5481-3d69-438a-80be-5511ecc55ddf","Type":"ContainerDied","Data":"a73935f70471abb74f85296052780e4d9bed6bbfad00023cd1228801a91a4e6e"}
Dec 04 06:27:51 crc kubenswrapper[4832]: I1204 06:27:51.993963 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.005757377 podStartE2EDuration="1m5.99393959s" podCreationTimestamp="2025-12-04 06:26:46 +0000 UTC" firstStartedPulling="2025-12-04 06:26:48.767244754 +0000 UTC m=+1064.380062470" lastFinishedPulling="2025-12-04 06:27:15.755426977 +0000 UTC m=+1091.368244683" observedRunningTime="2025-12-04 06:27:51.988542904 +0000 UTC m=+1127.601360620" watchObservedRunningTime="2025-12-04 06:27:51.99393959 +0000 UTC m=+1127.606757296"
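
The startup-latency entry above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (1m5.99393959s), and podStartSLOduration excludes the image-pull window (lastFinishedPulling minus firstStartedPulling, about 26.988s), leaving roughly the logged 39.005757377s. A sketch that reproduces the numbers from the timestamps exactly as logged:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the pod_startup_latency_tracker entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-12-04 06:26:46 +0000 UTC")
        firstPull := parse("2025-12-04 06:26:48.767244754 +0000 UTC")
        lastPull := parse("2025-12-04 06:27:15.755426977 +0000 UTC")
        observed := parse("2025-12-04 06:27:51.99393959 +0000 UTC")

        e2e := observed.Sub(created)     // 1m5.99393959s = podStartE2EDuration
        pull := lastPull.Sub(firstPull)  // ~26.988s spent pulling the rabbitmq image
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull ~= podStartSLOduration (~39.0057s)
    }

The later entry for ovn-controller-kcxl8-config-q8tl6 shows the no-pull case: with firstStartedPulling and lastFinishedPulling at the zero time 0001-01-01 00:00:00 +0000 UTC, its SLO and E2E durations coincide at 4.135309391s.
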
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.491447 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.497031 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-722d-account-create-update-hps7x"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.503622 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9g9d7"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4jw\" (UniqueName: \"kubernetes.io/projected/2aaa5481-3d69-438a-80be-5511ecc55ddf-kube-api-access-rn4jw\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606748 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-operator-scripts\") pod \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-ring-data-devices\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-dispersionconf\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606898 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2aaa5481-3d69-438a-80be-5511ecc55ddf-etc-swift\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlcfw\" (UniqueName: \"kubernetes.io/projected/2155b199-49bd-41f9-9253-dd3f6d786bdb-kube-api-access-nlcfw\") pod \"2155b199-49bd-41f9-9253-dd3f6d786bdb\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606977 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-combined-ca-bundle\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.606997 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-swiftconf\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.607054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-scripts\") pod \"2aaa5481-3d69-438a-80be-5511ecc55ddf\" (UID: \"2aaa5481-3d69-438a-80be-5511ecc55ddf\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.607104 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2155b199-49bd-41f9-9253-dd3f6d786bdb-operator-scripts\") pod \"2155b199-49bd-41f9-9253-dd3f6d786bdb\" (UID: \"2155b199-49bd-41f9-9253-dd3f6d786bdb\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.607125 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thgrc\" (UniqueName: \"kubernetes.io/projected/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-kube-api-access-thgrc\") pod \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\" (UID: \"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b\") "
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.608625 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" (UID: "d2cca739-d2fb-4f7e-aec3-89d3bf4d998b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.609113 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.609197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aaa5481-3d69-438a-80be-5511ecc55ddf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.610160 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2155b199-49bd-41f9-9253-dd3f6d786bdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2155b199-49bd-41f9-9253-dd3f6d786bdb" (UID: "2155b199-49bd-41f9-9253-dd3f6d786bdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.613716 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aaa5481-3d69-438a-80be-5511ecc55ddf-kube-api-access-rn4jw" (OuterVolumeSpecName: "kube-api-access-rn4jw") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "kube-api-access-rn4jw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.613820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-kube-api-access-thgrc" (OuterVolumeSpecName: "kube-api-access-thgrc") pod "d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" (UID: "d2cca739-d2fb-4f7e-aec3-89d3bf4d998b"). InnerVolumeSpecName "kube-api-access-thgrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.618597 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2155b199-49bd-41f9-9253-dd3f6d786bdb-kube-api-access-nlcfw" (OuterVolumeSpecName: "kube-api-access-nlcfw") pod "2155b199-49bd-41f9-9253-dd3f6d786bdb" (UID: "2155b199-49bd-41f9-9253-dd3f6d786bdb"). InnerVolumeSpecName "kube-api-access-nlcfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.622036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.637782 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.638729 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.652029 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-scripts" (OuterVolumeSpecName: "scripts") pod "2aaa5481-3d69-438a-80be-5511ecc55ddf" (UID: "2aaa5481-3d69-438a-80be-5511ecc55ddf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.709899 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710186 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-swiftconf\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710196 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710208 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2155b199-49bd-41f9-9253-dd3f6d786bdb-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710217 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thgrc\" (UniqueName: \"kubernetes.io/projected/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-kube-api-access-thgrc\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710227 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4jw\" (UniqueName: \"kubernetes.io/projected/2aaa5481-3d69-438a-80be-5511ecc55ddf-kube-api-access-rn4jw\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710235 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710243 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2aaa5481-3d69-438a-80be-5511ecc55ddf-ring-data-devices\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710250 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2aaa5481-3d69-438a-80be-5511ecc55ddf-dispersionconf\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710260 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2aaa5481-3d69-438a-80be-5511ecc55ddf-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.710268 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlcfw\" (UniqueName: \"kubernetes.io/projected/2155b199-49bd-41f9-9253-dd3f6d786bdb-kube-api-access-nlcfw\") on node \"crc\" DevicePath \"\""
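
Teardown mirrors the mount path: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally "Volume detached" (reconciler_common.go:293) once the volume leaves the kubelet's actual state of the world. Under the same stdin convention as the earlier sketch, a quick consistency check that every unmount eventually reports a detach, keyed on the UniqueName since display names such as operator-scripts recur across pods:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // UniqueName is the unambiguous key, e.g.
        //   kubernetes.io/configmap/2155b199-...-operator-scripts
        started := regexp.MustCompile(`UnmountVolume started for volume .*\(UniqueName: \\"([^\\]+)\\"`)
        detached := regexp.MustCompile(`Volume detached for volume .*\(UniqueName: \\"([^\\]+)\\"`)

        pending := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            line := sc.Text()
            if m := started.FindStringSubmatch(line); m != nil {
                pending[m[1]] = true
            } else if m := detached.FindStringSubmatch(line); m != nil {
                delete(pending, m[1])
            }
        }
        for vol := range pending {
            fmt.Println("unmount started but no detach seen:", vol)
        }
    }

For the three job pods above the check comes up empty: all eleven "UnmountVolume started" entries pair with a "Volume detached" entry.
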
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.988066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9g9d7" event={"ID":"d2cca739-d2fb-4f7e-aec3-89d3bf4d998b","Type":"ContainerDied","Data":"e76de645a982b819539f9e91e255a37301f20dcd5788a02c2fac073e7d06f035"}
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.988105 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e76de645a982b819539f9e91e255a37301f20dcd5788a02c2fac073e7d06f035"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.988158 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9g9d7"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.992301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-722d-account-create-update-hps7x" event={"ID":"2155b199-49bd-41f9-9253-dd3f6d786bdb","Type":"ContainerDied","Data":"f4638546d6c6085f4091f794a18898ade64090862db975573d9aa54270a025f6"}
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.992356 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4638546d6c6085f4091f794a18898ade64090862db975573d9aa54270a025f6"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.992317 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-722d-account-create-update-hps7x"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.993793 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vnbbf" event={"ID":"2aaa5481-3d69-438a-80be-5511ecc55ddf","Type":"ContainerDied","Data":"97c337a64675bced6a9e657ec77a5600d832ff83f3cb512b695106c49728ec87"}
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.993815 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c337a64675bced6a9e657ec77a5600d832ff83f3cb512b695106c49728ec87"
Dec 04 06:27:53 crc kubenswrapper[4832]: I1204 06:27:53.993833 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vnbbf"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.549514 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kcxl8" podUID="6de6fb2f-c87b-41af-8e93-05d7da0fad2a" containerName="ovn-controller" probeResult="failure" output=<
Dec 04 06:27:55 crc kubenswrapper[4832]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 04 06:27:55 crc kubenswrapper[4832]: >
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.585377 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m96v7"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.592133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m96v7"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.823731 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kcxl8-config-q8tl6"]
Dec 04 06:27:55 crc kubenswrapper[4832]: E1204 06:27:55.824109 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" containerName="mariadb-database-create"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.824127 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" containerName="mariadb-database-create"
Dec 04 06:27:55 crc kubenswrapper[4832]: E1204 06:27:55.824152 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aaa5481-3d69-438a-80be-5511ecc55ddf" containerName="swift-ring-rebalance"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.824163 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aaa5481-3d69-438a-80be-5511ecc55ddf" containerName="swift-ring-rebalance"
Dec 04 06:27:55 crc kubenswrapper[4832]: E1204 06:27:55.824186 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2155b199-49bd-41f9-9253-dd3f6d786bdb" containerName="mariadb-account-create-update"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.824196 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2155b199-49bd-41f9-9253-dd3f6d786bdb" containerName="mariadb-account-create-update"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.824407 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2155b199-49bd-41f9-9253-dd3f6d786bdb" containerName="mariadb-account-create-update"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.824444 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" containerName="mariadb-database-create"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.824470 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aaa5481-3d69-438a-80be-5511ecc55ddf" containerName="swift-ring-rebalance"
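
Admitting ovn-controller-kcxl8-config-q8tl6 first purges the per-container resource-accounting state left behind by the three just-finished job pods; the cpu_manager lines are logged at error level (E1204) but, paired with the immediate "Deleted CPUSet assignment" and memory_manager confirmations, they read as routine cleanup here. A toy sketch of that purge pattern, with hypothetical types and an invented cpuset value; the real managers key their checkpointed assignments by pod UID and container name:

    package main

    import "fmt"

    // assignments is a stand-in for the managers' checkpointed state:
    // pod UID -> container name -> assigned cpuset (values invented).
    type assignments map[string]map[string]string

    func removeStaleState(a assignments, active map[string]bool) {
        for uid, containers := range a {
            if active[uid] {
                continue // pod still admitted; keep its state
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", name, uid)
            }
            delete(a, uid) // the "Deleted CPUSet assignment" step
        }
    }

    func main() {
        a := assignments{
            "d2cca739-d2fb-4f7e-aec3-89d3bf4d998b": {"mariadb-database-create": "0-3"},
            "2aaa5481-3d69-438a-80be-5511ecc55ddf": {"swift-ring-rebalance": "0-3"},
        }
        removeStaleState(a, map[string]bool{}) // no active pods -> both entries purged
    }
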
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.825061 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.830809 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.861676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-log-ovn\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.861753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4zg\" (UniqueName: \"kubernetes.io/projected/42953213-a4b9-4260-8f15-dcf18b1897c8-kube-api-access-hd4zg\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.861781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-additional-scripts\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.861856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.861923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-scripts\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.862143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run-ovn\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.880887 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kcxl8-config-q8tl6"]
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963238 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-scripts\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run-ovn\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-log-ovn\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963366 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd4zg\" (UniqueName: \"kubernetes.io/projected/42953213-a4b9-4260-8f15-dcf18b1897c8-kube-api-access-hd4zg\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-additional-scripts\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963697 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-log-ovn\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963763 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run-ovn\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.963804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.964302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-additional-scripts\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.965559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-scripts\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:55 crc kubenswrapper[4832]: I1204 06:27:55.981503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd4zg\" (UniqueName: \"kubernetes.io/projected/42953213-a4b9-4260-8f15-dcf18b1897c8-kube-api-access-hd4zg\") pod \"ovn-controller-kcxl8-config-q8tl6\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:56 crc kubenswrapper[4832]: I1204 06:27:56.147107 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kcxl8-config-q8tl6"
Dec 04 06:27:56 crc kubenswrapper[4832]: I1204 06:27:56.732783 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kcxl8-config-q8tl6"]
Dec 04 06:27:56 crc kubenswrapper[4832]: W1204 06:27:56.741966 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42953213_a4b9_4260_8f15_dcf18b1897c8.slice/crio-0dadb949cb4a43c8cc1316ad4f80002b04243e32f3495b3c839c24ded0169bc1 WatchSource:0}: Error finding container 0dadb949cb4a43c8cc1316ad4f80002b04243e32f3495b3c839c24ded0169bc1: Status 404 returned error can't find the container with id 0dadb949cb4a43c8cc1316ad4f80002b04243e32f3495b3c839c24ded0169bc1
Dec 04 06:27:57 crc kubenswrapper[4832]: I1204 06:27:57.022542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kcxl8-config-q8tl6" event={"ID":"42953213-a4b9-4260-8f15-dcf18b1897c8","Type":"ContainerStarted","Data":"0dadb949cb4a43c8cc1316ad4f80002b04243e32f3495b3c839c24ded0169bc1"}
Dec 04 06:27:59 crc kubenswrapper[4832]: I1204 06:27:59.040612 4832 generic.go:334] "Generic (PLEG): container finished" podID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerID="acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663" exitCode=0
Dec 04 06:27:59 crc kubenswrapper[4832]: I1204 06:27:59.040817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1d41c5c2-5373-423b-b14f-00c902111ee3","Type":"ContainerDied","Data":"acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663"}
Dec 04 06:27:59 crc kubenswrapper[4832]: I1204 06:27:59.043717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kcxl8-config-q8tl6" event={"ID":"42953213-a4b9-4260-8f15-dcf18b1897c8","Type":"ContainerStarted","Data":"95432ea2ad169a209bd4a9070d45aebb8e572aaf8e25d63c2894ade7aa9bc66f"}
Dec 04 06:27:59 crc kubenswrapper[4832]: I1204 06:27:59.135337 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kcxl8-config-q8tl6" podStartSLOduration=4.135309391 podStartE2EDuration="4.135309391s" podCreationTimestamp="2025-12-04 06:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:27:59.127609387 +0000 UTC m=+1134.740427103" watchObservedRunningTime="2025-12-04 06:27:59.135309391 +0000 UTC m=+1134.748127097"
Dec 04 06:28:00 crc kubenswrapper[4832]: I1204 06:28:00.053733 4832 generic.go:334] "Generic (PLEG): container finished" podID="42953213-a4b9-4260-8f15-dcf18b1897c8" containerID="95432ea2ad169a209bd4a9070d45aebb8e572aaf8e25d63c2894ade7aa9bc66f" exitCode=0
Dec 04 06:28:00 crc kubenswrapper[4832]: I1204 06:28:00.054143 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kcxl8-config-q8tl6" event={"ID":"42953213-a4b9-4260-8f15-dcf18b1897c8","Type":"ContainerDied","Data":"95432ea2ad169a209bd4a9070d45aebb8e572aaf8e25d63c2894ade7aa9bc66f"}
Dec 04 06:28:00 crc kubenswrapper[4832]: I1204 06:28:00.560708 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kcxl8"
Dec 04 06:28:05 crc kubenswrapper[4832]: I1204 06:28:05.240183 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0"
Dec 04 06:28:05 crc kubenswrapper[4832]: I1204 06:28:05.257612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5889bafa-1999-43e3-846b-234db0db6e83-etc-swift\") pod \"swift-storage-0\" (UID: \"5889bafa-1999-43e3-846b-234db0db6e83\") " pod="openstack/swift-storage-0"
Dec 04 06:28:05 crc kubenswrapper[4832]: I1204 06:28:05.290894 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 04 06:28:07 crc kubenswrapper[4832]: I1204 06:28:07.974611 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.283173 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-52rzl"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.284264 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.296795 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-52rzl"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.463490 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239e8321-436b-4abb-8d3e-9e9dade5f5dd-operator-scripts\") pod \"cinder-db-create-52rzl\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.463993 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67vc\" (UniqueName: \"kubernetes.io/projected/239e8321-436b-4abb-8d3e-9e9dade5f5dd-kube-api-access-w67vc\") pod \"cinder-db-create-52rzl\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.506461 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-81b7-account-create-update-qwfh6"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.508575 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.512167 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.532812 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81b7-account-create-update-qwfh6"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.565160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e29e22-dcdd-42fc-b7ca-412187993b2c-operator-scripts\") pod \"barbican-81b7-account-create-update-qwfh6\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.565343 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpf89\" (UniqueName: \"kubernetes.io/projected/d3e29e22-dcdd-42fc-b7ca-412187993b2c-kube-api-access-wpf89\") pod \"barbican-81b7-account-create-update-qwfh6\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.565413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w67vc\" (UniqueName: \"kubernetes.io/projected/239e8321-436b-4abb-8d3e-9e9dade5f5dd-kube-api-access-w67vc\") pod \"cinder-db-create-52rzl\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.565443 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239e8321-436b-4abb-8d3e-9e9dade5f5dd-operator-scripts\") pod \"cinder-db-create-52rzl\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.566358 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239e8321-436b-4abb-8d3e-9e9dade5f5dd-operator-scripts\") pod \"cinder-db-create-52rzl\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.589724 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rxzc6"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.591291 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.594076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w67vc\" (UniqueName: \"kubernetes.io/projected/239e8321-436b-4abb-8d3e-9e9dade5f5dd-kube-api-access-w67vc\") pod \"cinder-db-create-52rzl\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.598228 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-93d4-account-create-update-9swbr"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.599808 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.602351 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.613594 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rxzc6"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.626375 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-93d4-account-create-update-9swbr"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.674281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e29e22-dcdd-42fc-b7ca-412187993b2c-operator-scripts\") pod \"barbican-81b7-account-create-update-qwfh6\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.674364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97a78092-fac6-45b5-96a8-acfc47ff879e-operator-scripts\") pod \"barbican-db-create-rxzc6\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.674465 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9nb\" (UniqueName: \"kubernetes.io/projected/97504893-0aed-473e-8297-aef920fc6503-kube-api-access-ph9nb\") pod \"cinder-93d4-account-create-update-9swbr\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.674491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97504893-0aed-473e-8297-aef920fc6503-operator-scripts\") pod \"cinder-93d4-account-create-update-9swbr\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.674519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpf89\" (UniqueName: \"kubernetes.io/projected/d3e29e22-dcdd-42fc-b7ca-412187993b2c-kube-api-access-wpf89\") pod \"barbican-81b7-account-create-update-qwfh6\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.674553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx55w\" (UniqueName: \"kubernetes.io/projected/97a78092-fac6-45b5-96a8-acfc47ff879e-kube-api-access-zx55w\") pod \"barbican-db-create-rxzc6\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.675134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e29e22-dcdd-42fc-b7ca-412187993b2c-operator-scripts\") pod \"barbican-81b7-account-create-update-qwfh6\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.691217 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-n6xzn"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.692547 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.726213 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-n6xzn"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.727314 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpf89\" (UniqueName: \"kubernetes.io/projected/d3e29e22-dcdd-42fc-b7ca-412187993b2c-kube-api-access-wpf89\") pod \"barbican-81b7-account-create-update-qwfh6\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.771019 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52rzl"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.776863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx55w\" (UniqueName: \"kubernetes.io/projected/97a78092-fac6-45b5-96a8-acfc47ff879e-kube-api-access-zx55w\") pod \"barbican-db-create-rxzc6\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.776987 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a717aed-76c5-4d65-8b4e-62bc86503f2d-operator-scripts\") pod \"neutron-db-create-n6xzn\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.778093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97a78092-fac6-45b5-96a8-acfc47ff879e-operator-scripts\") pod \"barbican-db-create-rxzc6\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.778136 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8nv\" (UniqueName: \"kubernetes.io/projected/3a717aed-76c5-4d65-8b4e-62bc86503f2d-kube-api-access-fs8nv\") pod \"neutron-db-create-n6xzn\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.778236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9nb\" (UniqueName: \"kubernetes.io/projected/97504893-0aed-473e-8297-aef920fc6503-kube-api-access-ph9nb\") pod \"cinder-93d4-account-create-update-9swbr\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.778268 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97504893-0aed-473e-8297-aef920fc6503-operator-scripts\") pod \"cinder-93d4-account-create-update-9swbr\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.779232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97504893-0aed-473e-8297-aef920fc6503-operator-scripts\") pod \"cinder-93d4-account-create-update-9swbr\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.784323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97a78092-fac6-45b5-96a8-acfc47ff879e-operator-scripts\") pod \"barbican-db-create-rxzc6\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.821463 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2581-account-create-update-p9dgq"]
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.915040 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81b7-account-create-update-qwfh6"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.921927 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.925886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs8nv\" (UniqueName: \"kubernetes.io/projected/3a717aed-76c5-4d65-8b4e-62bc86503f2d-kube-api-access-fs8nv\") pod \"neutron-db-create-n6xzn\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.926204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a717aed-76c5-4d65-8b4e-62bc86503f2d-operator-scripts\") pod \"neutron-db-create-n6xzn\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:08 crc kubenswrapper[4832]: I1204 06:28:08.927509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a717aed-76c5-4d65-8b4e-62bc86503f2d-operator-scripts\") pod \"neutron-db-create-n6xzn\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.019153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx55w\" (UniqueName: \"kubernetes.io/projected/97a78092-fac6-45b5-96a8-acfc47ff879e-kube-api-access-zx55w\") pod \"barbican-db-create-rxzc6\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.020884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9nb\" (UniqueName: \"kubernetes.io/projected/97504893-0aed-473e-8297-aef920fc6503-kube-api-access-ph9nb\") pod \"cinder-93d4-account-create-update-9swbr\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.023698 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.025070 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-93d4-account-create-update-9swbr"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.032092 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2581-account-create-update-p9dgq"]
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.032878 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rxzc6"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.034170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkbq\" (UniqueName: \"kubernetes.io/projected/9986d0ac-da6a-44f4-be0b-6d4009c23176-kube-api-access-gdkbq\") pod \"neutron-2581-account-create-update-p9dgq\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.034239 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9986d0ac-da6a-44f4-be0b-6d4009c23176-operator-scripts\") pod \"neutron-2581-account-create-update-p9dgq\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.039053 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dm9n4"]
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.041549 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.044518 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.044682 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.045240 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.045356 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dm9n4"]
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.045378 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs8nv\" (UniqueName: \"kubernetes.io/projected/3a717aed-76c5-4d65-8b4e-62bc86503f2d-kube-api-access-fs8nv\") pod \"neutron-db-create-n6xzn\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " pod="openstack/neutron-db-create-n6xzn"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.045489 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4k75v"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.136080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdrd\" (UniqueName: \"kubernetes.io/projected/e3afc1f7-3354-4d97-a224-c5f886599881-kube-api-access-nrdrd\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.136137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkbq\" (UniqueName: \"kubernetes.io/projected/9986d0ac-da6a-44f4-be0b-6d4009c23176-kube-api-access-gdkbq\") pod \"neutron-2581-account-create-update-p9dgq\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.136207 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9986d0ac-da6a-44f4-be0b-6d4009c23176-operator-scripts\") pod \"neutron-2581-account-create-update-p9dgq\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.136377 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-config-data\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.136456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-combined-ca-bundle\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.137254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9986d0ac-da6a-44f4-be0b-6d4009c23176-operator-scripts\") pod \"neutron-2581-account-create-update-p9dgq\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.155931 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkbq\" (UniqueName: \"kubernetes.io/projected/9986d0ac-da6a-44f4-be0b-6d4009c23176-kube-api-access-gdkbq\") pod \"neutron-2581-account-create-update-p9dgq\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " pod="openstack/neutron-2581-account-create-update-p9dgq"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.237146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrdrd\" (UniqueName: \"kubernetes.io/projected/e3afc1f7-3354-4d97-a224-c5f886599881-kube-api-access-nrdrd\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.237212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-config-data\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.237288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-combined-ca-bundle\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.240948 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-combined-ca-bundle\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4"
\"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-config-data\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4" Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.296857 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrdrd\" (UniqueName: \"kubernetes.io/projected/e3afc1f7-3354-4d97-a224-c5f886599881-kube-api-access-nrdrd\") pod \"keystone-db-sync-dm9n4\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " pod="openstack/keystone-db-sync-dm9n4" Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.342402 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-n6xzn" Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.377871 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2581-account-create-update-p9dgq" Dec 04 06:28:09 crc kubenswrapper[4832]: I1204 06:28:09.385573 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dm9n4" Dec 04 06:28:11 crc kubenswrapper[4832]: E1204 06:28:11.888143 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 04 06:28:11 crc kubenswrapper[4832]: E1204 06:28:11.889445 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-db-sync-lckgk_openstack(8f4298e5-b22d-4f71-b682-87539fc2bae7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:28:11 crc kubenswrapper[4832]: E1204 06:28:11.891123 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-lckgk" podUID="8f4298e5-b22d-4f71-b682-87539fc2bae7" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.146775 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kcxl8-config-q8tl6" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.225667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-scripts\") pod \"42953213-a4b9-4260-8f15-dcf18b1897c8\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.226022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-additional-scripts\") pod \"42953213-a4b9-4260-8f15-dcf18b1897c8\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.226083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd4zg\" (UniqueName: \"kubernetes.io/projected/42953213-a4b9-4260-8f15-dcf18b1897c8-kube-api-access-hd4zg\") pod \"42953213-a4b9-4260-8f15-dcf18b1897c8\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.226133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-log-ovn\") pod \"42953213-a4b9-4260-8f15-dcf18b1897c8\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.226237 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run\") pod \"42953213-a4b9-4260-8f15-dcf18b1897c8\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.226297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run-ovn\") pod \"42953213-a4b9-4260-8f15-dcf18b1897c8\" (UID: \"42953213-a4b9-4260-8f15-dcf18b1897c8\") " Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.227494 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-scripts" (OuterVolumeSpecName: "scripts") pod "42953213-a4b9-4260-8f15-dcf18b1897c8" (UID: "42953213-a4b9-4260-8f15-dcf18b1897c8"). InnerVolumeSpecName "scripts". 
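
The ErrImagePull above records a CRI pull of openstack-glance-api that was canceled mid-copy; kubelet dumps the full Container spec under "UnhandledError", fails the sync ("Error syncing pod, skipping"), and by 06:28:12.334 below the same pod sits in ImagePullBackOff. A minimal sketch of the doubling back-off that spaces those retries; the 10s initial delay and 5m cap are commonly cited kubelet defaults, assumed here rather than read from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults (not taken from this log): image-pull back-off
	// starts at 10s and doubles per failed attempt up to a 5m ceiling.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("retry %d after %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// retry 1 after 10s, then 20s, 40s, 1m20s, 2m40s, pinned at 5m0s.
}
```
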
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.228545 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "42953213-a4b9-4260-8f15-dcf18b1897c8" (UID: "42953213-a4b9-4260-8f15-dcf18b1897c8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.229259 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "42953213-a4b9-4260-8f15-dcf18b1897c8" (UID: "42953213-a4b9-4260-8f15-dcf18b1897c8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.229291 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run" (OuterVolumeSpecName: "var-run") pod "42953213-a4b9-4260-8f15-dcf18b1897c8" (UID: "42953213-a4b9-4260-8f15-dcf18b1897c8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.229444 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "42953213-a4b9-4260-8f15-dcf18b1897c8" (UID: "42953213-a4b9-4260-8f15-dcf18b1897c8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.231730 4832 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.231754 4832 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.231775 4832 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.231787 4832 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42953213-a4b9-4260-8f15-dcf18b1897c8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.231801 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42953213-a4b9-4260-8f15-dcf18b1897c8-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.331915 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kcxl8-config-q8tl6" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.331982 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kcxl8-config-q8tl6" event={"ID":"42953213-a4b9-4260-8f15-dcf18b1897c8","Type":"ContainerDied","Data":"0dadb949cb4a43c8cc1316ad4f80002b04243e32f3495b3c839c24ded0169bc1"} Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.332054 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dadb949cb4a43c8cc1316ad4f80002b04243e32f3495b3c839c24ded0169bc1" Dec 04 06:28:12 crc kubenswrapper[4832]: E1204 06:28:12.334257 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-lckgk" podUID="8f4298e5-b22d-4f71-b682-87539fc2bae7" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.355964 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42953213-a4b9-4260-8f15-dcf18b1897c8-kube-api-access-hd4zg" (OuterVolumeSpecName: "kube-api-access-hd4zg") pod "42953213-a4b9-4260-8f15-dcf18b1897c8" (UID: "42953213-a4b9-4260-8f15-dcf18b1897c8"). InnerVolumeSpecName "kube-api-access-hd4zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.437844 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd4zg\" (UniqueName: \"kubernetes.io/projected/42953213-a4b9-4260-8f15-dcf18b1897c8-kube-api-access-hd4zg\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.694365 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-n6xzn"] Dec 04 06:28:12 crc kubenswrapper[4832]: W1204 06:28:12.699449 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a717aed_76c5_4d65_8b4e_62bc86503f2d.slice/crio-699903d91566268c8a2d071125b53bae3bd7a3725626462f0066b22bb72d18ae WatchSource:0}: Error finding container 699903d91566268c8a2d071125b53bae3bd7a3725626462f0066b22bb72d18ae: Status 404 returned error can't find the container with id 699903d91566268c8a2d071125b53bae3bd7a3725626462f0066b22bb72d18ae Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.726399 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 04 06:28:12 crc kubenswrapper[4832]: W1204 06:28:12.726669 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5889bafa_1999_43e3_846b_234db0db6e83.slice/crio-a5c3a7de16f23c62a4b5a69bf8db9ad4ae16944fc72fb5280ce99c58f1bad9f2 WatchSource:0}: Error finding container a5c3a7de16f23c62a4b5a69bf8db9ad4ae16944fc72fb5280ce99c58f1bad9f2: Status 404 returned error can't find the container with id a5c3a7de16f23c62a4b5a69bf8db9ad4ae16944fc72fb5280ce99c58f1bad9f2 Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.870107 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2581-account-create-update-p9dgq"] Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.879840 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-52rzl"] Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.967931 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81b7-account-create-update-qwfh6"] Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.978555 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dm9n4"] Dec 04 06:28:12 crc kubenswrapper[4832]: I1204 06:28:12.987087 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-93d4-account-create-update-9swbr"] Dec 04 06:28:13 crc kubenswrapper[4832]: W1204 06:28:13.000345 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3afc1f7_3354_4d97_a224_c5f886599881.slice/crio-c24ab6f1e38d1c4ec8af7af76e476f3b8e9865c9e35ad5832402988a831ad046 WatchSource:0}: Error finding container c24ab6f1e38d1c4ec8af7af76e476f3b8e9865c9e35ad5832402988a831ad046: Status 404 returned error can't find the container with id c24ab6f1e38d1c4ec8af7af76e476f3b8e9865c9e35ad5832402988a831ad046 Dec 04 06:28:13 crc kubenswrapper[4832]: W1204 06:28:13.004307 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3e29e22_dcdd_42fc_b7ca_412187993b2c.slice/crio-ebca6628cf1fe6540cdfbac9ad22a02bf3376247ea5429f4436a5708011a052b WatchSource:0}: Error finding container ebca6628cf1fe6540cdfbac9ad22a02bf3376247ea5429f4436a5708011a052b: Status 404 returned error can't find the container with id ebca6628cf1fe6540cdfbac9ad22a02bf3376247ea5429f4436a5708011a052b Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.139822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rxzc6"] Dec 04 06:28:13 crc kubenswrapper[4832]: W1204 06:28:13.175307 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a78092_fac6_45b5_96a8_acfc47ff879e.slice/crio-0ac80b71bf37a24c73102ea57868aaa8324e809151a9fd518a571a9987fce26d WatchSource:0}: Error finding container 0ac80b71bf37a24c73102ea57868aaa8324e809151a9fd518a571a9987fce26d: Status 404 returned error can't find the container with id 0ac80b71bf37a24c73102ea57868aaa8324e809151a9fd518a571a9987fce26d Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.293903 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kcxl8-config-q8tl6"] Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.308527 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kcxl8-config-q8tl6"] Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.341675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-93d4-account-create-update-9swbr" event={"ID":"97504893-0aed-473e-8297-aef920fc6503","Type":"ContainerStarted","Data":"e825216fcf2adb89f64438764c523bd280d461f86354e3ee6e55ae934c17c236"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.343080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2581-account-create-update-p9dgq" event={"ID":"9986d0ac-da6a-44f4-be0b-6d4009c23176","Type":"ContainerStarted","Data":"cc43b8f3e511c7591ae9cc0bece280bc392c0e42b3a0362db85e4b8b759d589b"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.343173 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2581-account-create-update-p9dgq" event={"ID":"9986d0ac-da6a-44f4-be0b-6d4009c23176","Type":"ContainerStarted","Data":"0de5b21f9b2138c927be9bfebe1cb3b7ec7d5e49046a87670f4a50a9560a0dae"} Dec 04 
06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.347463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1d41c5c2-5373-423b-b14f-00c902111ee3","Type":"ContainerStarted","Data":"265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.349246 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rxzc6" event={"ID":"97a78092-fac6-45b5-96a8-acfc47ff879e","Type":"ContainerStarted","Data":"0ac80b71bf37a24c73102ea57868aaa8324e809151a9fd518a571a9987fce26d"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.349522 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.351085 4832 generic.go:334] "Generic (PLEG): container finished" podID="3a717aed-76c5-4d65-8b4e-62bc86503f2d" containerID="b37d4cf9152a03efbf82c400d4f21b373603535e0c5a1a0097914a404692c017" exitCode=0 Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.351140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-n6xzn" event={"ID":"3a717aed-76c5-4d65-8b4e-62bc86503f2d","Type":"ContainerDied","Data":"b37d4cf9152a03efbf82c400d4f21b373603535e0c5a1a0097914a404692c017"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.351159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-n6xzn" event={"ID":"3a717aed-76c5-4d65-8b4e-62bc86503f2d","Type":"ContainerStarted","Data":"699903d91566268c8a2d071125b53bae3bd7a3725626462f0066b22bb72d18ae"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.352861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"a5c3a7de16f23c62a4b5a69bf8db9ad4ae16944fc72fb5280ce99c58f1bad9f2"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.365789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52rzl" event={"ID":"239e8321-436b-4abb-8d3e-9e9dade5f5dd","Type":"ContainerStarted","Data":"b08a0e6876e7886a9912a4368e604444f9d2966b86307ae429c4825d0ff33d7c"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.365837 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52rzl" event={"ID":"239e8321-436b-4abb-8d3e-9e9dade5f5dd","Type":"ContainerStarted","Data":"a45a7dcaee71d07493ce57b1310ed0ac0e8bbf2660ca5135f710e8b8db6195c8"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.368683 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2581-account-create-update-p9dgq" podStartSLOduration=5.368663912 podStartE2EDuration="5.368663912s" podCreationTimestamp="2025-12-04 06:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:13.361191205 +0000 UTC m=+1148.974008911" watchObservedRunningTime="2025-12-04 06:28:13.368663912 +0000 UTC m=+1148.981481618" Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.368784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81b7-account-create-update-qwfh6" event={"ID":"d3e29e22-dcdd-42fc-b7ca-412187993b2c","Type":"ContainerStarted","Data":"ebca6628cf1fe6540cdfbac9ad22a02bf3376247ea5429f4436a5708011a052b"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.372991 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dm9n4" event={"ID":"e3afc1f7-3354-4d97-a224-c5f886599881","Type":"ContainerStarted","Data":"c24ab6f1e38d1c4ec8af7af76e476f3b8e9865c9e35ad5832402988a831ad046"} Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.425916 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371949.428888 podStartE2EDuration="1m27.425887496s" podCreationTimestamp="2025-12-04 06:26:46 +0000 UTC" firstStartedPulling="2025-12-04 06:26:49.22528408 +0000 UTC m=+1064.838101786" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:13.411601009 +0000 UTC m=+1149.024418725" watchObservedRunningTime="2025-12-04 06:28:13.425887496 +0000 UTC m=+1149.038705202" Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.461329 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-52rzl" podStartSLOduration=5.461307304 podStartE2EDuration="5.461307304s" podCreationTimestamp="2025-12-04 06:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:13.431496847 +0000 UTC m=+1149.044314553" watchObservedRunningTime="2025-12-04 06:28:13.461307304 +0000 UTC m=+1149.074125020" Dec 04 06:28:13 crc kubenswrapper[4832]: I1204 06:28:13.470009 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-81b7-account-create-update-qwfh6" podStartSLOduration=5.4699863220000005 podStartE2EDuration="5.469986322s" podCreationTimestamp="2025-12-04 06:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:13.459737995 +0000 UTC m=+1149.072555701" watchObservedRunningTime="2025-12-04 06:28:13.469986322 +0000 UTC m=+1149.082804028" Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.387245 4832 generic.go:334] "Generic (PLEG): container finished" podID="97504893-0aed-473e-8297-aef920fc6503" containerID="fe34b2d02b1a7cf187f58366e6eda53e755ec4a0a1f5c8f2f1bbc033b643f1d3" exitCode=0 Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.387335 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-93d4-account-create-update-9swbr" event={"ID":"97504893-0aed-473e-8297-aef920fc6503","Type":"ContainerDied","Data":"fe34b2d02b1a7cf187f58366e6eda53e755ec4a0a1f5c8f2f1bbc033b643f1d3"} Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.389581 4832 generic.go:334] "Generic (PLEG): container finished" podID="9986d0ac-da6a-44f4-be0b-6d4009c23176" containerID="cc43b8f3e511c7591ae9cc0bece280bc392c0e42b3a0362db85e4b8b759d589b" exitCode=0 Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.389659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2581-account-create-update-p9dgq" event={"ID":"9986d0ac-da6a-44f4-be0b-6d4009c23176","Type":"ContainerDied","Data":"cc43b8f3e511c7591ae9cc0bece280bc392c0e42b3a0362db85e4b8b759d589b"} Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.392857 4832 generic.go:334] "Generic (PLEG): container finished" podID="97a78092-fac6-45b5-96a8-acfc47ff879e" containerID="f92b73253a8f18d477eebd3d46c19a0e1a150d7dbc682b1b018274b8339aa7ba" exitCode=0 Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.392942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-rxzc6" event={"ID":"97a78092-fac6-45b5-96a8-acfc47ff879e","Type":"ContainerDied","Data":"f92b73253a8f18d477eebd3d46c19a0e1a150d7dbc682b1b018274b8339aa7ba"} Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.394516 4832 generic.go:334] "Generic (PLEG): container finished" podID="239e8321-436b-4abb-8d3e-9e9dade5f5dd" containerID="b08a0e6876e7886a9912a4368e604444f9d2966b86307ae429c4825d0ff33d7c" exitCode=0 Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.394564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52rzl" event={"ID":"239e8321-436b-4abb-8d3e-9e9dade5f5dd","Type":"ContainerDied","Data":"b08a0e6876e7886a9912a4368e604444f9d2966b86307ae429c4825d0ff33d7c"} Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.396705 4832 generic.go:334] "Generic (PLEG): container finished" podID="d3e29e22-dcdd-42fc-b7ca-412187993b2c" containerID="a819f838b63512bdad292c913abcc80ee624d6656a75afda6a6136edcf0a2f54" exitCode=0 Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.396753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81b7-account-create-update-qwfh6" event={"ID":"d3e29e22-dcdd-42fc-b7ca-412187993b2c","Type":"ContainerDied","Data":"a819f838b63512bdad292c913abcc80ee624d6656a75afda6a6136edcf0a2f54"} Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.726477 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42953213-a4b9-4260-8f15-dcf18b1897c8" path="/var/lib/kubelet/pods/42953213-a4b9-4260-8f15-dcf18b1897c8/volumes" Dec 04 06:28:14 crc kubenswrapper[4832]: I1204 06:28:14.905703 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-n6xzn" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.033882 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs8nv\" (UniqueName: \"kubernetes.io/projected/3a717aed-76c5-4d65-8b4e-62bc86503f2d-kube-api-access-fs8nv\") pod \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.033963 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a717aed-76c5-4d65-8b4e-62bc86503f2d-operator-scripts\") pod \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\" (UID: \"3a717aed-76c5-4d65-8b4e-62bc86503f2d\") " Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.034881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a717aed-76c5-4d65-8b4e-62bc86503f2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a717aed-76c5-4d65-8b4e-62bc86503f2d" (UID: "3a717aed-76c5-4d65-8b4e-62bc86503f2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.040862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a717aed-76c5-4d65-8b4e-62bc86503f2d-kube-api-access-fs8nv" (OuterVolumeSpecName: "kube-api-access-fs8nv") pod "3a717aed-76c5-4d65-8b4e-62bc86503f2d" (UID: "3a717aed-76c5-4d65-8b4e-62bc86503f2d"). InnerVolumeSpecName "kube-api-access-fs8nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.136694 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs8nv\" (UniqueName: \"kubernetes.io/projected/3a717aed-76c5-4d65-8b4e-62bc86503f2d-kube-api-access-fs8nv\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.136783 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a717aed-76c5-4d65-8b4e-62bc86503f2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.416177 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-n6xzn" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.416179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-n6xzn" event={"ID":"3a717aed-76c5-4d65-8b4e-62bc86503f2d","Type":"ContainerDied","Data":"699903d91566268c8a2d071125b53bae3bd7a3725626462f0066b22bb72d18ae"} Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.416259 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="699903d91566268c8a2d071125b53bae3bd7a3725626462f0066b22bb72d18ae" Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.422989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"f208d601b8fb61de8a1280b6acbbfa0ab5ecf10c9dc7262b0e9fadc322794a13"} Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.423066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"57bef91f9931a036ec3b1553f038bbcf0ff20a0944643b61fe7f7494994f250d"} Dec 04 06:28:15 crc kubenswrapper[4832]: I1204 06:28:15.423081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"ab848a65d947350d311d2007b607d8ca34ddf2e0a2859ca9f39542262fcc4647"} Dec 04 06:28:16 crc kubenswrapper[4832]: I1204 06:28:16.467106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"4e730db1de1313f17a135b303b2e7bfe27ab5a997247aaad765c2c9a31cede6e"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.543703 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52rzl" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.553966 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2581-account-create-update-p9dgq" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.607814 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81b7-account-create-update-qwfh6" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.639985 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-93d4-account-create-update-9swbr" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.649069 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rxzc6" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.659571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w67vc\" (UniqueName: \"kubernetes.io/projected/239e8321-436b-4abb-8d3e-9e9dade5f5dd-kube-api-access-w67vc\") pod \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.659775 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239e8321-436b-4abb-8d3e-9e9dade5f5dd-operator-scripts\") pod \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\" (UID: \"239e8321-436b-4abb-8d3e-9e9dade5f5dd\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.659897 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkbq\" (UniqueName: \"kubernetes.io/projected/9986d0ac-da6a-44f4-be0b-6d4009c23176-kube-api-access-gdkbq\") pod \"9986d0ac-da6a-44f4-be0b-6d4009c23176\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.659937 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9986d0ac-da6a-44f4-be0b-6d4009c23176-operator-scripts\") pod \"9986d0ac-da6a-44f4-be0b-6d4009c23176\" (UID: \"9986d0ac-da6a-44f4-be0b-6d4009c23176\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.660954 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239e8321-436b-4abb-8d3e-9e9dade5f5dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "239e8321-436b-4abb-8d3e-9e9dade5f5dd" (UID: "239e8321-436b-4abb-8d3e-9e9dade5f5dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.661262 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9986d0ac-da6a-44f4-be0b-6d4009c23176-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9986d0ac-da6a-44f4-be0b-6d4009c23176" (UID: "9986d0ac-da6a-44f4-be0b-6d4009c23176"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.661595 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9986d0ac-da6a-44f4-be0b-6d4009c23176-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.661691 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239e8321-436b-4abb-8d3e-9e9dade5f5dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.667412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9986d0ac-da6a-44f4-be0b-6d4009c23176-kube-api-access-gdkbq" (OuterVolumeSpecName: "kube-api-access-gdkbq") pod "9986d0ac-da6a-44f4-be0b-6d4009c23176" (UID: "9986d0ac-da6a-44f4-be0b-6d4009c23176"). InnerVolumeSpecName "kube-api-access-gdkbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.683643 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239e8321-436b-4abb-8d3e-9e9dade5f5dd-kube-api-access-w67vc" (OuterVolumeSpecName: "kube-api-access-w67vc") pod "239e8321-436b-4abb-8d3e-9e9dade5f5dd" (UID: "239e8321-436b-4abb-8d3e-9e9dade5f5dd"). InnerVolumeSpecName "kube-api-access-w67vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.741540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rxzc6" event={"ID":"97a78092-fac6-45b5-96a8-acfc47ff879e","Type":"ContainerDied","Data":"0ac80b71bf37a24c73102ea57868aaa8324e809151a9fd518a571a9987fce26d"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.741788 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac80b71bf37a24c73102ea57868aaa8324e809151a9fd518a571a9987fce26d" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.741729 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rxzc6" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.743833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81b7-account-create-update-qwfh6" event={"ID":"d3e29e22-dcdd-42fc-b7ca-412187993b2c","Type":"ContainerDied","Data":"ebca6628cf1fe6540cdfbac9ad22a02bf3376247ea5429f4436a5708011a052b"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.743945 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebca6628cf1fe6540cdfbac9ad22a02bf3376247ea5429f4436a5708011a052b" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.743854 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81b7-account-create-update-qwfh6" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.745310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-52rzl" event={"ID":"239e8321-436b-4abb-8d3e-9e9dade5f5dd","Type":"ContainerDied","Data":"a45a7dcaee71d07493ce57b1310ed0ac0e8bbf2660ca5135f710e8b8db6195c8"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.745331 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a45a7dcaee71d07493ce57b1310ed0ac0e8bbf2660ca5135f710e8b8db6195c8" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.745347 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-52rzl" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.747454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dm9n4" event={"ID":"e3afc1f7-3354-4d97-a224-c5f886599881","Type":"ContainerStarted","Data":"0f09073e99532314b0fe1bf6d8545ae1566d4e988b643d96fdcc7c13ccf942b0"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.749441 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-93d4-account-create-update-9swbr" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.749461 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-93d4-account-create-update-9swbr" event={"ID":"97504893-0aed-473e-8297-aef920fc6503","Type":"ContainerDied","Data":"e825216fcf2adb89f64438764c523bd280d461f86354e3ee6e55ae934c17c236"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.749511 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e825216fcf2adb89f64438764c523bd280d461f86354e3ee6e55ae934c17c236" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.756857 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2581-account-create-update-p9dgq" event={"ID":"9986d0ac-da6a-44f4-be0b-6d4009c23176","Type":"ContainerDied","Data":"0de5b21f9b2138c927be9bfebe1cb3b7ec7d5e49046a87670f4a50a9560a0dae"} Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.756901 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2581-account-create-update-p9dgq" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.756912 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de5b21f9b2138c927be9bfebe1cb3b7ec7d5e49046a87670f4a50a9560a0dae" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.762931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97a78092-fac6-45b5-96a8-acfc47ff879e-operator-scripts\") pod \"97a78092-fac6-45b5-96a8-acfc47ff879e\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.764921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a78092-fac6-45b5-96a8-acfc47ff879e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97a78092-fac6-45b5-96a8-acfc47ff879e" (UID: "97a78092-fac6-45b5-96a8-acfc47ff879e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.768317 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dm9n4" podStartSLOduration=5.3338383799999995 podStartE2EDuration="12.768302555s" podCreationTimestamp="2025-12-04 06:28:08 +0000 UTC" firstStartedPulling="2025-12-04 06:28:13.016909098 +0000 UTC m=+1148.629726804" lastFinishedPulling="2025-12-04 06:28:20.451373273 +0000 UTC m=+1156.064190979" observedRunningTime="2025-12-04 06:28:20.76608636 +0000 UTC m=+1156.378904066" watchObservedRunningTime="2025-12-04 06:28:20.768302555 +0000 UTC m=+1156.381120261" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.770904 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx55w\" (UniqueName: \"kubernetes.io/projected/97a78092-fac6-45b5-96a8-acfc47ff879e-kube-api-access-zx55w\") pod \"97a78092-fac6-45b5-96a8-acfc47ff879e\" (UID: \"97a78092-fac6-45b5-96a8-acfc47ff879e\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.770963 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97504893-0aed-473e-8297-aef920fc6503-operator-scripts\") pod \"97504893-0aed-473e-8297-aef920fc6503\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.770997 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpf89\" (UniqueName: \"kubernetes.io/projected/d3e29e22-dcdd-42fc-b7ca-412187993b2c-kube-api-access-wpf89\") pod \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.771133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph9nb\" (UniqueName: \"kubernetes.io/projected/97504893-0aed-473e-8297-aef920fc6503-kube-api-access-ph9nb\") pod \"97504893-0aed-473e-8297-aef920fc6503\" (UID: \"97504893-0aed-473e-8297-aef920fc6503\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.771202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e29e22-dcdd-42fc-b7ca-412187993b2c-operator-scripts\") pod \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\" (UID: \"d3e29e22-dcdd-42fc-b7ca-412187993b2c\") " Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.771904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97504893-0aed-473e-8297-aef920fc6503-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97504893-0aed-473e-8297-aef920fc6503" (UID: "97504893-0aed-473e-8297-aef920fc6503"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.772706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e29e22-dcdd-42fc-b7ca-412187993b2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3e29e22-dcdd-42fc-b7ca-412187993b2c" (UID: "d3e29e22-dcdd-42fc-b7ca-412187993b2c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.772974 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkbq\" (UniqueName: \"kubernetes.io/projected/9986d0ac-da6a-44f4-be0b-6d4009c23176-kube-api-access-gdkbq\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.773062 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97a78092-fac6-45b5-96a8-acfc47ff879e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.773107 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w67vc\" (UniqueName: \"kubernetes.io/projected/239e8321-436b-4abb-8d3e-9e9dade5f5dd-kube-api-access-w67vc\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.775867 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e29e22-dcdd-42fc-b7ca-412187993b2c-kube-api-access-wpf89" (OuterVolumeSpecName: "kube-api-access-wpf89") pod "d3e29e22-dcdd-42fc-b7ca-412187993b2c" (UID: "d3e29e22-dcdd-42fc-b7ca-412187993b2c"). InnerVolumeSpecName "kube-api-access-wpf89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.776257 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a78092-fac6-45b5-96a8-acfc47ff879e-kube-api-access-zx55w" (OuterVolumeSpecName: "kube-api-access-zx55w") pod "97a78092-fac6-45b5-96a8-acfc47ff879e" (UID: "97a78092-fac6-45b5-96a8-acfc47ff879e"). InnerVolumeSpecName "kube-api-access-zx55w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.777932 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97504893-0aed-473e-8297-aef920fc6503-kube-api-access-ph9nb" (OuterVolumeSpecName: "kube-api-access-ph9nb") pod "97504893-0aed-473e-8297-aef920fc6503" (UID: "97504893-0aed-473e-8297-aef920fc6503"). InnerVolumeSpecName "kube-api-access-ph9nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.877349 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx55w\" (UniqueName: \"kubernetes.io/projected/97a78092-fac6-45b5-96a8-acfc47ff879e-kube-api-access-zx55w\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.877381 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97504893-0aed-473e-8297-aef920fc6503-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.877414 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpf89\" (UniqueName: \"kubernetes.io/projected/d3e29e22-dcdd-42fc-b7ca-412187993b2c-kube-api-access-wpf89\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.877426 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph9nb\" (UniqueName: \"kubernetes.io/projected/97504893-0aed-473e-8297-aef920fc6503-kube-api-access-ph9nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:20 crc kubenswrapper[4832]: I1204 06:28:20.877439 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3e29e22-dcdd-42fc-b7ca-412187993b2c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:21 crc kubenswrapper[4832]: I1204 06:28:21.771582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"943305e2b77a0ac1b20197b815ffdaf5f69e37d7bda9cac0fdea72f28d88aae1"} Dec 04 06:28:21 crc kubenswrapper[4832]: I1204 06:28:21.771962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"676b55010389e88cdc98c5ca3304da457c82b2c2e2c73f163fd5e0d177d37594"} Dec 04 06:28:21 crc kubenswrapper[4832]: I1204 06:28:21.771977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"bcb43fe2d7182d59566119ddadfd3087725fe4d7d4eb24b07b08ca9fc0ac0193"} Dec 04 06:28:22 crc kubenswrapper[4832]: I1204 06:28:22.782203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"1bc355c311d1aa9e4909f0dc314c5ce7cf59cf3dc03464d9f0a7bdf4b6aecddd"} Dec 04 06:28:23 crc kubenswrapper[4832]: I1204 06:28:23.810999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"5b22945080865a91a5342fcebd88025d73d0acfcaf3d9b5cbba1707e99ae4545"} Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.823976 4832 generic.go:334] "Generic (PLEG): container finished" podID="e3afc1f7-3354-4d97-a224-c5f886599881" containerID="0f09073e99532314b0fe1bf6d8545ae1566d4e988b643d96fdcc7c13ccf942b0" exitCode=0 Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.824114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dm9n4" event={"ID":"e3afc1f7-3354-4d97-a224-c5f886599881","Type":"ContainerDied","Data":"0f09073e99532314b0fe1bf6d8545ae1566d4e988b643d96fdcc7c13ccf942b0"} Dec 04 06:28:24 crc 
kubenswrapper[4832]: I1204 06:28:24.829250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lckgk" event={"ID":"8f4298e5-b22d-4f71-b682-87539fc2bae7","Type":"ContainerStarted","Data":"1ec08c6c9c2e2e5c28d7a348235144809cb25febbfc0c813e4b9901c2fae084c"} Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.839574 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"2348c27ce50a135893c7315b1371c6dd30941424d4b693240f6d8d6471edbede"} Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.839628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"cea2752bb8a01ddc950b2e33cf532ed0ad1e9f6695a205dac0dca6fea1578184"} Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.839643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"9829a8fe53874a2a01b524749926238c63be4a0612197345320bc7479ed3c580"} Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.839656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"eba7f813f51f10830340a6131663c6459e19e695bd23201416d1f28ec2a3f7c2"} Dec 04 06:28:24 crc kubenswrapper[4832]: I1204 06:28:24.883514 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-lckgk" podStartSLOduration=3.091641087 podStartE2EDuration="34.883486465s" podCreationTimestamp="2025-12-04 06:27:50 +0000 UTC" firstStartedPulling="2025-12-04 06:27:51.731221997 +0000 UTC m=+1127.344039703" lastFinishedPulling="2025-12-04 06:28:23.523067375 +0000 UTC m=+1159.135885081" observedRunningTime="2025-12-04 06:28:24.867787341 +0000 UTC m=+1160.480605067" watchObservedRunningTime="2025-12-04 06:28:24.883486465 +0000 UTC m=+1160.496304171" Dec 04 06:28:25 crc kubenswrapper[4832]: I1204 06:28:25.866484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"349ed9eb061d5cf822a485dd8c5c6378b02cb2a794802739c4144d83f3f9b563"} Dec 04 06:28:25 crc kubenswrapper[4832]: I1204 06:28:25.868092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5889bafa-1999-43e3-846b-234db0db6e83","Type":"ContainerStarted","Data":"3651fd6a2252210b7584b8e5b46529f58088a48fb86951dd18eff3ca3183bc25"} Dec 04 06:28:25 crc kubenswrapper[4832]: I1204 06:28:25.934171 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.140006119 podStartE2EDuration="53.934152633s" podCreationTimestamp="2025-12-04 06:27:32 +0000 UTC" firstStartedPulling="2025-12-04 06:28:12.728944762 +0000 UTC m=+1148.341762468" lastFinishedPulling="2025-12-04 06:28:23.523091276 +0000 UTC m=+1159.135908982" observedRunningTime="2025-12-04 06:28:25.934046501 +0000 UTC m=+1161.546864207" watchObservedRunningTime="2025-12-04 06:28:25.934152633 +0000 UTC m=+1161.546970339" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.263411 4832 util.go:48] "No ready sandbox for pod can be found. 
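
For contrast with the wrapped value at 06:28:13.425916, the glance-db-sync-lckgk entry above shows the startup-SLO formula behaving normally: end-to-end 34.883486465s minus the 06:27:51.731 → 06:28:23.523 pull window leaves exactly the logged podStartSLOduration=3.091641087:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Copied from the glance-db-sync-lckgk entry above.
	firstPull := time.Date(2025, time.December, 4, 6, 27, 51, 731221997, time.UTC)
	lastPull := time.Date(2025, time.December, 4, 6, 28, 23, 523067375, time.UTC)
	e2e := 34883486465 * time.Nanosecond // podStartE2EDuration="34.883486465s"

	pulling := lastPull.Sub(firstPull)     // 31.791845378s spent pulling the image
	fmt.Println((e2e - pulling).Seconds()) // 3.091641087 == podStartSLOduration
}
```
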
Need to start a new one" pod="openstack/keystone-db-sync-dm9n4" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.334296 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b9rf7"] Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340682 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a78092-fac6-45b5-96a8-acfc47ff879e" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340725 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a78092-fac6-45b5-96a8-acfc47ff879e" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340749 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239e8321-436b-4abb-8d3e-9e9dade5f5dd" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340758 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="239e8321-436b-4abb-8d3e-9e9dade5f5dd" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340778 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a717aed-76c5-4d65-8b4e-62bc86503f2d" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340786 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a717aed-76c5-4d65-8b4e-62bc86503f2d" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340806 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e29e22-dcdd-42fc-b7ca-412187993b2c" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340813 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e29e22-dcdd-42fc-b7ca-412187993b2c" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340830 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3afc1f7-3354-4d97-a224-c5f886599881" containerName="keystone-db-sync" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340842 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3afc1f7-3354-4d97-a224-c5f886599881" containerName="keystone-db-sync" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340866 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9986d0ac-da6a-44f4-be0b-6d4009c23176" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340884 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9986d0ac-da6a-44f4-be0b-6d4009c23176" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340911 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97504893-0aed-473e-8297-aef920fc6503" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340927 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="97504893-0aed-473e-8297-aef920fc6503" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: E1204 06:28:26.340948 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42953213-a4b9-4260-8f15-dcf18b1897c8" containerName="ovn-config" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.340959 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="42953213-a4b9-4260-8f15-dcf18b1897c8" containerName="ovn-config" Dec 04 06:28:26 crc 
kubenswrapper[4832]: I1204 06:28:26.341344 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3afc1f7-3354-4d97-a224-c5f886599881" containerName="keystone-db-sync" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341380 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9986d0ac-da6a-44f4-be0b-6d4009c23176" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341414 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a78092-fac6-45b5-96a8-acfc47ff879e" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341432 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="42953213-a4b9-4260-8f15-dcf18b1897c8" containerName="ovn-config" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341446 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e29e22-dcdd-42fc-b7ca-412187993b2c" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341470 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a717aed-76c5-4d65-8b4e-62bc86503f2d" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341491 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="239e8321-436b-4abb-8d3e-9e9dade5f5dd" containerName="mariadb-database-create" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.341507 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="97504893-0aed-473e-8297-aef920fc6503" containerName="mariadb-account-create-update" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.346853 4832 util.go:30] "No sandbox for pod can be found. 
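
The RemoveStaleState / "Deleted CPUSet assignment" burst above is routine cleanup despite the E-level tag: every podUID it names (the barbican, cinder, and neutron create jobs, keystone-db-sync, and the ovn-config pod) already exited earlier in this log, and the CPU and memory managers are dropping the resource assignments those containers held before admitting the new dnsmasq pod. Roughly the shape of that sweep, as an illustrative sketch (hypothetical names, not kubelet's actual code):

```go
package main

import "fmt"

// removeStaleState drops per-container assignments whose pod is no
// longer in the active set — roughly what the cpu/memory manager
// entries above record. Illustrative only.
func removeStaleState(assignments map[string][]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for _, name := range containers {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", name, podUID)
		}
		delete(assignments, podUID) // cf. state_mem: "Deleted CPUSet assignment"
	}
}

func main() {
	assignments := map[string][]string{
		"e3afc1f7-3354-4d97-a224-c5f886599881": {"keystone-db-sync"},
	}
	removeStaleState(assignments, map[string]bool{}) // job already finished, pod gone
}
```
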
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.349692 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b9rf7"] Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.350209 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.426606 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-combined-ca-bundle\") pod \"e3afc1f7-3354-4d97-a224-c5f886599881\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.426671 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrdrd\" (UniqueName: \"kubernetes.io/projected/e3afc1f7-3354-4d97-a224-c5f886599881-kube-api-access-nrdrd\") pod \"e3afc1f7-3354-4d97-a224-c5f886599881\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.426804 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-config-data\") pod \"e3afc1f7-3354-4d97-a224-c5f886599881\" (UID: \"e3afc1f7-3354-4d97-a224-c5f886599881\") " Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.427148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.427245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-svc\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.427273 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.427299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.427316 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vnd\" (UniqueName: \"kubernetes.io/projected/04d94cb3-7cc0-4f1e-96df-2b175f063923-kube-api-access-v7vnd\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc 
kubenswrapper[4832]: I1204 06:28:26.427337 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-config\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.440722 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3afc1f7-3354-4d97-a224-c5f886599881-kube-api-access-nrdrd" (OuterVolumeSpecName: "kube-api-access-nrdrd") pod "e3afc1f7-3354-4d97-a224-c5f886599881" (UID: "e3afc1f7-3354-4d97-a224-c5f886599881"). InnerVolumeSpecName "kube-api-access-nrdrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.459958 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3afc1f7-3354-4d97-a224-c5f886599881" (UID: "e3afc1f7-3354-4d97-a224-c5f886599881"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.484843 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-config-data" (OuterVolumeSpecName: "config-data") pod "e3afc1f7-3354-4d97-a224-c5f886599881" (UID: "e3afc1f7-3354-4d97-a224-c5f886599881"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.528909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-svc\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.528970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vnd\" (UniqueName: \"kubernetes.io/projected/04d94cb3-7cc0-4f1e-96df-2b175f063923-kube-api-access-v7vnd\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-config\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: 
\"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529372 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529407 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3afc1f7-3354-4d97-a224-c5f886599881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.529424 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrdrd\" (UniqueName: \"kubernetes.io/projected/e3afc1f7-3354-4d97-a224-c5f886599881-kube-api-access-nrdrd\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.530456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.530545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.530781 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-config\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.531002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-svc\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.531245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.549251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vnd\" (UniqueName: \"kubernetes.io/projected/04d94cb3-7cc0-4f1e-96df-2b175f063923-kube-api-access-v7vnd\") pod \"dnsmasq-dns-764c5664d7-b9rf7\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") " 
pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.666204 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.884455 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dm9n4" Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.884982 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dm9n4" event={"ID":"e3afc1f7-3354-4d97-a224-c5f886599881","Type":"ContainerDied","Data":"c24ab6f1e38d1c4ec8af7af76e476f3b8e9865c9e35ad5832402988a831ad046"} Dec 04 06:28:26 crc kubenswrapper[4832]: I1204 06:28:26.885042 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c24ab6f1e38d1c4ec8af7af76e476f3b8e9865c9e35ad5832402988a831ad046" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.205880 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l28b8"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.207298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.212882 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.213148 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.213331 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.213446 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4k75v" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.225972 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.249072 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqc9\" (UniqueName: \"kubernetes.io/projected/380ef86a-fae8-4946-857a-8fc69f555304-kube-api-access-vkqc9\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.249610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-fernet-keys\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.249685 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-credential-keys\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.249710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-scripts\") pod 
\"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.249733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-combined-ca-bundle\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.249928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-config-data\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.256921 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b9rf7"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.268781 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b9rf7"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.324587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l28b8"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.390665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-config-data\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.390797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqc9\" (UniqueName: \"kubernetes.io/projected/380ef86a-fae8-4946-857a-8fc69f555304-kube-api-access-vkqc9\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.390841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-fernet-keys\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.390898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-credential-keys\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.390926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-scripts\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.390951 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-combined-ca-bundle\") pod 
\"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.420591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-fernet-keys\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.422867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-combined-ca-bundle\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.435902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-credential-keys\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.447146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-config-data\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.447241 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-b5nzc"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.449156 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.498100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-scripts\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.527003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqc9\" (UniqueName: \"kubernetes.io/projected/380ef86a-fae8-4946-857a-8fc69f555304-kube-api-access-vkqc9\") pod \"keystone-bootstrap-l28b8\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.529759 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.590144 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-b5nzc"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.604989 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.605085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-svc\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.605139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.605177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rrp\" (UniqueName: \"kubernetes.io/projected/ad67203c-822c-4d1c-89c7-dc7550446a85-kube-api-access-72rrp\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.605231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-config\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.605336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.643440 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mxwh7"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.645014 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.674451 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.674500 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j4csp" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.675139 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.684016 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mxwh7"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.698802 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d49f8894f-6hsxv"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.700516 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.710491 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.710846 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-bbzlz" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.711062 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.738594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.739710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhd7h\" (UniqueName: \"kubernetes.io/projected/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-kube-api-access-fhd7h\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.739843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-config-data\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.740966 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-db-sync-config-data\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc 
kubenswrapper[4832]: I1204 06:28:27.741327 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b2da44-0766-4710-9351-b550f260667e-logs\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741437 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02b2da44-0766-4710-9351-b550f260667e-horizon-secret-key\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-svc\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-scripts\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741818 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv55f\" (UniqueName: \"kubernetes.io/projected/02b2da44-0766-4710-9351-b550f260667e-kube-api-access-rv55f\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.741915 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-etc-machine-id\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.742000 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rrp\" (UniqueName: \"kubernetes.io/projected/ad67203c-822c-4d1c-89c7-dc7550446a85-kube-api-access-72rrp\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.742104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-config-data\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc 
kubenswrapper[4832]: I1204 06:28:27.742213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-config\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.742101 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.743452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-combined-ca-bundle\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.743582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-config\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.743696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-scripts\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.743718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-svc\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.743828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.744496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.745028 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.786506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d49f8894f-6hsxv"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.814754 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rrp\" (UniqueName: 
\"kubernetes.io/projected/ad67203c-822c-4d1c-89c7-dc7550446a85-kube-api-access-72rrp\") pod \"dnsmasq-dns-5959f8865f-b5nzc\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") " pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.827139 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.834341 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.842799 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.843059 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.846690 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-scripts\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.846918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhd7h\" (UniqueName: \"kubernetes.io/projected/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-kube-api-access-fhd7h\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.847017 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-config-data\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.847100 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-db-sync-config-data\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.856906 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-config-data\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.858144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b2da44-0766-4710-9351-b550f260667e-logs\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.858688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02b2da44-0766-4710-9351-b550f260667e-horizon-secret-key\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.858833 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-scripts\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.858985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv55f\" (UniqueName: \"kubernetes.io/projected/02b2da44-0766-4710-9351-b550f260667e-kube-api-access-rv55f\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.859606 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-etc-machine-id\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.859748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-config-data\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.860147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-combined-ca-bundle\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.860366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b2da44-0766-4710-9351-b550f260667e-logs\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.862743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-etc-machine-id\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.864282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-scripts\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.870113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-config-data\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.870721 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.871801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-combined-ca-bundle\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.875254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02b2da44-0766-4710-9351-b550f260667e-horizon-secret-key\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.891899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-db-sync-config-data\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.895978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-scripts\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.897816 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhd7h\" (UniqueName: \"kubernetes.io/projected/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-kube-api-access-fhd7h\") pod \"cinder-db-sync-mxwh7\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.909431 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jggjz"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.910929 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jggjz" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.911968 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.914200 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv55f\" (UniqueName: \"kubernetes.io/projected/02b2da44-0766-4710-9351-b550f260667e-kube-api-access-rv55f\") pod \"horizon-5d49f8894f-6hsxv\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.916879 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jjfxl" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.917450 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.924618 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jggjz"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.924846 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" event={"ID":"04d94cb3-7cc0-4f1e-96df-2b175f063923","Type":"ContainerStarted","Data":"9512e609d144445255f917480446ce5a8d8b12f555a5c9a557f34bf70e771f50"} Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.955116 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tm9nr"] Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.956360 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tm9nr" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.964348 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.964825 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.965051 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jfcwq" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.966932 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-log-httpd\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-db-sync-config-data\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967137 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdfk\" (UniqueName: \"kubernetes.io/projected/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-kube-api-access-xgdfk\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967273 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-combined-ca-bundle\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-combined-ca-bundle\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-scripts\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967579 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-run-httpd\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-config\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v77j\" (UniqueName: \"kubernetes.io/projected/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-kube-api-access-2v77j\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967764 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqs6d\" (UniqueName: \"kubernetes.io/projected/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-kube-api-access-gqs6d\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz" Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.967810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-config-data\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0" 
Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.969318 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tm9nr"]
Dec 04 06:28:27 crc kubenswrapper[4832]: I1204 06:28:27.998487 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-b5nzc"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.009605 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6479b8b47c-x6wkf"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.015625 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6479b8b47c-x6wkf"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.015759 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.044461 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-m66jf"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.046009 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.052805 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mxwh7"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.053401 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-m66jf"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.071961 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-db-sync-config-data\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072379 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b41f85-afb6-4287-881f-3f98e135d7bb-logs\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072559 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn4bz\" (UniqueName: \"kubernetes.io/projected/f61de78c-0748-4b52-bff7-26132bd7179c-kube-api-access-dn4bz\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdfk\" (UniqueName: \"kubernetes.io/projected/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-kube-api-access-xgdfk\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-combined-ca-bundle\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072915 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-config\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.072998 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-combined-ca-bundle\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-scripts\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-scripts\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-run-httpd\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073323 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6b41f85-afb6-4287-881f-3f98e135d7bb-horizon-secret-key\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zv2\" (UniqueName: \"kubernetes.io/projected/a6b41f85-afb6-4287-881f-3f98e135d7bb-kube-api-access-47zv2\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073511 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-config\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073590 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v77j\" (UniqueName: \"kubernetes.io/projected/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-kube-api-access-2v77j\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073677 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqs6d\" (UniqueName: \"kubernetes.io/projected/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-kube-api-access-gqs6d\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-config-data\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073893 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.073993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-log-httpd\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.074091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-config-data\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.074208 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.074320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.079223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-db-sync-config-data\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.081366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-run-httpd\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.082958 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-log-httpd\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.093320 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49f8894f-6hsxv"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.100669 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-combined-ca-bundle\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.102027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.111315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-config-data\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.113171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-config\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.118107 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdfk\" (UniqueName: \"kubernetes.io/projected/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-kube-api-access-xgdfk\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.123413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-combined-ca-bundle\") pod \"neutron-db-sync-tm9nr\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.126810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-scripts\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.136172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.136191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqs6d\" (UniqueName: \"kubernetes.io/projected/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-kube-api-access-gqs6d\") pod \"barbican-db-sync-jggjz\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") " pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.146764 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-znj8j"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.148373 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.154212 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.154279 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v77j\" (UniqueName: \"kubernetes.io/projected/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-kube-api-access-2v77j\") pod \"ceilometer-0\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.156164 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.156258 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bj74g"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.163118 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177451 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6b41f85-afb6-4287-881f-3f98e135d7bb-horizon-secret-key\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177533 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43b67ac-4870-4632-a6a2-84db802b371a-logs\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-combined-ca-bundle\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177590 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47zv2\" (UniqueName: \"kubernetes.io/projected/a6b41f85-afb6-4287-881f-3f98e135d7bb-kube-api-access-47zv2\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwjxr\" (UniqueName: \"kubernetes.io/projected/e43b67ac-4870-4632-a6a2-84db802b371a-kube-api-access-rwjxr\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-config-data\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b41f85-afb6-4287-881f-3f98e135d7bb-logs\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177780 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn4bz\" (UniqueName: \"kubernetes.io/projected/f61de78c-0748-4b52-bff7-26132bd7179c-kube-api-access-dn4bz\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177834 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-config\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-config-data\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-scripts\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.177887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-scripts\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.180148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-config-data\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.181584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b41f85-afb6-4287-881f-3f98e135d7bb-logs\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.182002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.182016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-scripts\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.189633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.233210 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-znj8j"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.283078 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.284378 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-config-data\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.284545 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-scripts\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.284682 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43b67ac-4870-4632-a6a2-84db802b371a-logs\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.284763 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-combined-ca-bundle\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.284900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwjxr\" (UniqueName: \"kubernetes.io/projected/e43b67ac-4870-4632-a6a2-84db802b371a-kube-api-access-rwjxr\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.288102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43b67ac-4870-4632-a6a2-84db802b371a-logs\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.298427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.298906 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6b41f85-afb6-4287-881f-3f98e135d7bb-horizon-secret-key\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.299503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-config\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.299892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.321835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-scripts\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.323125 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-config-data\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.326164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-combined-ca-bundle\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.340367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn4bz\" (UniqueName: \"kubernetes.io/projected/f61de78c-0748-4b52-bff7-26132bd7179c-kube-api-access-dn4bz\") pod \"dnsmasq-dns-58dd9ff6bc-m66jf\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.341201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwjxr\" (UniqueName: \"kubernetes.io/projected/e43b67ac-4870-4632-a6a2-84db802b371a-kube-api-access-rwjxr\") pod \"placement-db-sync-znj8j\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.353732 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tm9nr"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.364437 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47zv2\" (UniqueName: \"kubernetes.io/projected/a6b41f85-afb6-4287-881f-3f98e135d7bb-kube-api-access-47zv2\") pod \"horizon-6479b8b47c-x6wkf\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.365818 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.382015 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6479b8b47c-x6wkf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.434515 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.435951 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-znj8j"
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.611977 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l28b8"]
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.945352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l28b8" event={"ID":"380ef86a-fae8-4946-857a-8fc69f555304","Type":"ContainerStarted","Data":"a1f7d27112ad8ab5bb946b899881da94ba7bef36c8651e257607bc7107d9ba9a"}
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.953823 4832 generic.go:334] "Generic (PLEG): container finished" podID="04d94cb3-7cc0-4f1e-96df-2b175f063923" containerID="f49089505384cdc408301e968915172e66d01a9f25abe81a105be2d4c27b604e" exitCode=0
Dec 04 06:28:28 crc kubenswrapper[4832]: I1204 06:28:28.953875 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" event={"ID":"04d94cb3-7cc0-4f1e-96df-2b175f063923","Type":"ContainerDied","Data":"f49089505384cdc408301e968915172e66d01a9f25abe81a105be2d4c27b604e"}
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.265692 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.301900 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-b5nzc"]
Dec 04 06:28:29 crc kubenswrapper[4832]: W1204 06:28:29.306363 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad67203c_822c_4d1c_89c7_dc7550446a85.slice/crio-1763d43689b3376e22c5ffef9259e3b62b5211a5f3a0047c81b75fee769fd9a8 WatchSource:0}: Error finding container 1763d43689b3376e22c5ffef9259e3b62b5211a5f3a0047c81b75fee769fd9a8: Status 404 returned error can't find the container with id 1763d43689b3376e22c5ffef9259e3b62b5211a5f3a0047c81b75fee769fd9a8
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.427310 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d49f8894f-6hsxv"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.488467 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mxwh7"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.513889 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jggjz"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.539875 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tm9nr"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.583525 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6479b8b47c-x6wkf"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.593659 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-m66jf"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.618230 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-znj8j"]
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.968346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" event={"ID":"ad67203c-822c-4d1c-89c7-dc7550446a85","Type":"ContainerStarted","Data":"1763d43689b3376e22c5ffef9259e3b62b5211a5f3a0047c81b75fee769fd9a8"}
Dec 04 06:28:29 crc kubenswrapper[4832]: I1204 06:28:29.975337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerStarted","Data":"903bddc85d77cf9d8d8be5026e008318e94909062450c5228abf702589deaae3"}
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.652316 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.676708 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d49f8894f-6hsxv"]
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.704543 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b4c9bd8c5-dspfj"]
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.706374 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.726882 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b4c9bd8c5-dspfj"]
Dec 04 06:28:30 crc kubenswrapper[4832]: W1204 06:28:30.860525 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a55ba05_c1ce_48f6_b8af_b3b1497554e2.slice/crio-2412a764c5052f0f9429a052f8921372c0a8d35a8c36d55ebd6ed3a8a3c3ce25 WatchSource:0}: Error finding container 2412a764c5052f0f9429a052f8921372c0a8d35a8c36d55ebd6ed3a8a3c3ce25: Status 404 returned error can't find the container with id 2412a764c5052f0f9429a052f8921372c0a8d35a8c36d55ebd6ed3a8a3c3ce25
Dec 04 06:28:30 crc kubenswrapper[4832]: W1204 06:28:30.871683 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b41f85_afb6_4287_881f_3f98e135d7bb.slice/crio-5b31b6708605011d0e7443b5ad4c35436668eed8a7a8d606f8cfb9d5b130575a WatchSource:0}: Error finding container 5b31b6708605011d0e7443b5ad4c35436668eed8a7a8d606f8cfb9d5b130575a: Status 404 returned error can't find the container with id 5b31b6708605011d0e7443b5ad4c35436668eed8a7a8d606f8cfb9d5b130575a
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.908911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-config-data\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.909445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-scripts\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.909492 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-horizon-secret-key\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.909528 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-logs\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.909715 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr2x7\" (UniqueName: \"kubernetes.io/projected/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-kube-api-access-mr2x7\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.990091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mxwh7" event={"ID":"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1","Type":"ContainerStarted","Data":"d903aedc2fe384bf8fc637d67c628efc8b4639ea2676f059c1a3666ce15cd1b7"}
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.991415 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6479b8b47c-x6wkf" event={"ID":"a6b41f85-afb6-4287-881f-3f98e135d7bb","Type":"ContainerStarted","Data":"5b31b6708605011d0e7443b5ad4c35436668eed8a7a8d606f8cfb9d5b130575a"}
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.993098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tm9nr" event={"ID":"1a55ba05-c1ce-48f6-b8af-b3b1497554e2","Type":"ContainerStarted","Data":"2412a764c5052f0f9429a052f8921372c0a8d35a8c36d55ebd6ed3a8a3c3ce25"}
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.994011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" event={"ID":"f61de78c-0748-4b52-bff7-26132bd7179c","Type":"ContainerStarted","Data":"20d5e6c93269e18e09169a8059d26ea50b4baee15fc3584ab22300f6e3961733"}
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.996280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jggjz" event={"ID":"bac8c79c-e51d-4e52-a5d1-1f8472db13b1","Type":"ContainerStarted","Data":"e045b31671ddc39ac8bab5d3cb398e89ae1ce34cbee03c881d0d9c64a52f9a33"}
Dec 04 06:28:30 crc kubenswrapper[4832]: I1204 06:28:30.999423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-znj8j" event={"ID":"e43b67ac-4870-4632-a6a2-84db802b371a","Type":"ContainerStarted","Data":"c3d95d584a70a2c5f8cdd4721fbfbdda7d4be764a78fd0c3336ae529cb5cd31c"}
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.001310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49f8894f-6hsxv" event={"ID":"02b2da44-0766-4710-9351-b550f260667e","Type":"ContainerStarted","Data":"46142587a6b95cf9c3f1a84ebeefe827014e63eea1279747b71d7e4077aaa8e0"}
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.003743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7" event={"ID":"04d94cb3-7cc0-4f1e-96df-2b175f063923","Type":"ContainerDied","Data":"9512e609d144445255f917480446ce5a8d8b12f555a5c9a557f34bf70e771f50"}
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.003786 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9512e609d144445255f917480446ce5a8d8b12f555a5c9a557f34bf70e771f50"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.011415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-scripts\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.011481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-horizon-secret-key\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.011504 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-logs\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.011549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr2x7\" (UniqueName: \"kubernetes.io/projected/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-kube-api-access-mr2x7\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.011623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-config-data\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.012263 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-logs\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.012285 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-scripts\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.013213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-config-data\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.016605 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-horizon-secret-key\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.030104 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr2x7\" (UniqueName: \"kubernetes.io/projected/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-kube-api-access-mr2x7\") pod \"horizon-6b4c9bd8c5-dspfj\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.054291 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.180810 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7"
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.215128 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-nb\") pod \"04d94cb3-7cc0-4f1e-96df-2b175f063923\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") "
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.216427 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-config\") pod \"04d94cb3-7cc0-4f1e-96df-2b175f063923\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") "
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.216499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-svc\") pod \"04d94cb3-7cc0-4f1e-96df-2b175f063923\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") "
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.216559 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-swift-storage-0\") pod \"04d94cb3-7cc0-4f1e-96df-2b175f063923\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") "
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.216606 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-sb\") pod \"04d94cb3-7cc0-4f1e-96df-2b175f063923\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") "
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.216665 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vnd\" (UniqueName: \"kubernetes.io/projected/04d94cb3-7cc0-4f1e-96df-2b175f063923-kube-api-access-v7vnd\") pod \"04d94cb3-7cc0-4f1e-96df-2b175f063923\" (UID: \"04d94cb3-7cc0-4f1e-96df-2b175f063923\") "
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.233244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d94cb3-7cc0-4f1e-96df-2b175f063923-kube-api-access-v7vnd" (OuterVolumeSpecName: "kube-api-access-v7vnd") pod "04d94cb3-7cc0-4f1e-96df-2b175f063923" (UID: "04d94cb3-7cc0-4f1e-96df-2b175f063923"). InnerVolumeSpecName "kube-api-access-v7vnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.266865 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "04d94cb3-7cc0-4f1e-96df-2b175f063923" (UID: "04d94cb3-7cc0-4f1e-96df-2b175f063923"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.267115 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-config" (OuterVolumeSpecName: "config") pod "04d94cb3-7cc0-4f1e-96df-2b175f063923" (UID: "04d94cb3-7cc0-4f1e-96df-2b175f063923"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.274603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04d94cb3-7cc0-4f1e-96df-2b175f063923" (UID: "04d94cb3-7cc0-4f1e-96df-2b175f063923"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.282232 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "04d94cb3-7cc0-4f1e-96df-2b175f063923" (UID: "04d94cb3-7cc0-4f1e-96df-2b175f063923"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.300614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "04d94cb3-7cc0-4f1e-96df-2b175f063923" (UID: "04d94cb3-7cc0-4f1e-96df-2b175f063923"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.319083 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-config\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.319120 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.319134 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.319145 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.319155 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7vnd\" (UniqueName: \"kubernetes.io/projected/04d94cb3-7cc0-4f1e-96df-2b175f063923-kube-api-access-v7vnd\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.319163 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04d94cb3-7cc0-4f1e-96df-2b175f063923-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:31 crc kubenswrapper[4832]: I1204 06:28:31.615171 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b4c9bd8c5-dspfj"]
Dec 04 06:28:31 crc kubenswrapper[4832]: W1204 06:28:31.621538 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a34b59_464e_4a39_9f7a_c4ffe98f53f8.slice/crio-d0229f7eebc323adba1fac4fbc0b9239b2e9b49a3547ed8fdaf14668c60ba714 WatchSource:0}: Error finding container d0229f7eebc323adba1fac4fbc0b9239b2e9b49a3547ed8fdaf14668c60ba714: Status 404 returned error can't find the container with id d0229f7eebc323adba1fac4fbc0b9239b2e9b49a3547ed8fdaf14668c60ba714
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.016061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4c9bd8c5-dspfj" event={"ID":"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8","Type":"ContainerStarted","Data":"d0229f7eebc323adba1fac4fbc0b9239b2e9b49a3547ed8fdaf14668c60ba714"}
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.019910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l28b8" event={"ID":"380ef86a-fae8-4946-857a-8fc69f555304","Type":"ContainerStarted","Data":"2440bee35940df428a9a1a4587d644009856959b3f8cfa8fb06ad99ad10b7e33"}
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.022096 4832 generic.go:334] "Generic (PLEG): container finished" podID="ad67203c-822c-4d1c-89c7-dc7550446a85" containerID="3c13afcf33fe16af6deb0d527295ec9ee5673e9f832f4891ac653791b6546e50" exitCode=0
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.022158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" event={"ID":"ad67203c-822c-4d1c-89c7-dc7550446a85","Type":"ContainerDied","Data":"3c13afcf33fe16af6deb0d527295ec9ee5673e9f832f4891ac653791b6546e50"}
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.040464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tm9nr" event={"ID":"1a55ba05-c1ce-48f6-b8af-b3b1497554e2","Type":"ContainerStarted","Data":"0cdda13a5f5f2372712c104a5b736ec90f5eb96f85cd80b0f0669f181c25d3bd"}
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.049251 4832 generic.go:334] "Generic (PLEG): container finished" podID="f61de78c-0748-4b52-bff7-26132bd7179c" containerID="e28c2795fe29c25944c1bc1087a77388c1ad339c75e632aadfecaae4794156fe" exitCode=0
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.049359 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-b9rf7"
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.050067 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" event={"ID":"f61de78c-0748-4b52-bff7-26132bd7179c","Type":"ContainerDied","Data":"e28c2795fe29c25944c1bc1087a77388c1ad339c75e632aadfecaae4794156fe"}
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.051976 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l28b8" podStartSLOduration=5.051956089 podStartE2EDuration="5.051956089s" podCreationTimestamp="2025-12-04 06:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:32.048474853 +0000 UTC m=+1167.661292569" watchObservedRunningTime="2025-12-04 06:28:32.051956089 +0000 UTC m=+1167.664773795"
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.128147 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tm9nr" podStartSLOduration=5.128117878 podStartE2EDuration="5.128117878s" podCreationTimestamp="2025-12-04 06:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:32.095260863 +0000 UTC m=+1167.708078569" watchObservedRunningTime="2025-12-04 06:28:32.128117878 +0000 UTC m=+1167.740935584"
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.286517 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b9rf7"]
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.297850 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-b9rf7"]
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.558735 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc"
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.671573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-config\") pod \"ad67203c-822c-4d1c-89c7-dc7550446a85\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") "
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.671791 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-svc\") pod \"ad67203c-822c-4d1c-89c7-dc7550446a85\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") "
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.671845 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72rrp\" (UniqueName: \"kubernetes.io/projected/ad67203c-822c-4d1c-89c7-dc7550446a85-kube-api-access-72rrp\") pod \"ad67203c-822c-4d1c-89c7-dc7550446a85\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") "
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.671904 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-swift-storage-0\") pod \"ad67203c-822c-4d1c-89c7-dc7550446a85\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") "
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.672034 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-sb\") pod \"ad67203c-822c-4d1c-89c7-dc7550446a85\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") "
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.672090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-nb\") pod \"ad67203c-822c-4d1c-89c7-dc7550446a85\" (UID: \"ad67203c-822c-4d1c-89c7-dc7550446a85\") "
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.695101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad67203c-822c-4d1c-89c7-dc7550446a85" (UID: "ad67203c-822c-4d1c-89c7-dc7550446a85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.698628 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad67203c-822c-4d1c-89c7-dc7550446a85" (UID: "ad67203c-822c-4d1c-89c7-dc7550446a85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.698934 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad67203c-822c-4d1c-89c7-dc7550446a85" (UID: "ad67203c-822c-4d1c-89c7-dc7550446a85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.703635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad67203c-822c-4d1c-89c7-dc7550446a85-kube-api-access-72rrp" (OuterVolumeSpecName: "kube-api-access-72rrp") pod "ad67203c-822c-4d1c-89c7-dc7550446a85" (UID: "ad67203c-822c-4d1c-89c7-dc7550446a85"). InnerVolumeSpecName "kube-api-access-72rrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.723090 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-config" (OuterVolumeSpecName: "config") pod "ad67203c-822c-4d1c-89c7-dc7550446a85" (UID: "ad67203c-822c-4d1c-89c7-dc7550446a85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.745354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ad67203c-822c-4d1c-89c7-dc7550446a85" (UID: "ad67203c-822c-4d1c-89c7-dc7550446a85"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.748237 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d94cb3-7cc0-4f1e-96df-2b175f063923" path="/var/lib/kubelet/pods/04d94cb3-7cc0-4f1e-96df-2b175f063923/volumes"
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.774741 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.774817 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.774829 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-config\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.774837 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.774846 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72rrp\" (UniqueName: \"kubernetes.io/projected/ad67203c-822c-4d1c-89c7-dc7550446a85-kube-api-access-72rrp\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:32 crc kubenswrapper[4832]: I1204 06:28:32.774858 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad67203c-822c-4d1c-89c7-dc7550446a85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.067310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc" event={"ID":"ad67203c-822c-4d1c-89c7-dc7550446a85","Type":"ContainerDied","Data":"1763d43689b3376e22c5ffef9259e3b62b5211a5f3a0047c81b75fee769fd9a8"}
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.067416 4832 scope.go:117] "RemoveContainer" containerID="3c13afcf33fe16af6deb0d527295ec9ee5673e9f832f4891ac653791b6546e50"
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.067623 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-b5nzc"
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.078408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" event={"ID":"f61de78c-0748-4b52-bff7-26132bd7179c","Type":"ContainerStarted","Data":"07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863"}
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.078634 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.180486 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-b5nzc"]
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.202481 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-b5nzc"]
Dec 04 06:28:33 crc kubenswrapper[4832]: I1204 06:28:33.237142 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" podStartSLOduration=6.237115225 podStartE2EDuration="6.237115225s" podCreationTimestamp="2025-12-04 06:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:33.13653649 +0000 UTC m=+1168.749354206" watchObservedRunningTime="2025-12-04 06:28:33.237115225 +0000 UTC m=+1168.849932931"
Dec 04 06:28:34 crc kubenswrapper[4832]: I1204 06:28:34.775101 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad67203c-822c-4d1c-89c7-dc7550446a85" path="/var/lib/kubelet/pods/ad67203c-822c-4d1c-89c7-dc7550446a85/volumes"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.299039 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6479b8b47c-x6wkf"]
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.339470 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-587db8c9db-9blcn"]
Dec 04 06:28:36 crc kubenswrapper[4832]: E1204 06:28:36.340102 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad67203c-822c-4d1c-89c7-dc7550446a85" containerName="init"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.340127 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad67203c-822c-4d1c-89c7-dc7550446a85" containerName="init"
Dec 04 06:28:36 crc kubenswrapper[4832]: E1204 06:28:36.340176 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d94cb3-7cc0-4f1e-96df-2b175f063923" containerName="init"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.340184 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d94cb3-7cc0-4f1e-96df-2b175f063923" containerName="init"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.340436 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad67203c-822c-4d1c-89c7-dc7550446a85" containerName="init"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.340460 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d94cb3-7cc0-4f1e-96df-2b175f063923" containerName="init"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.341726 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.345925 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.354985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-587db8c9db-9blcn"]
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.425745 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b4c9bd8c5-dspfj"]
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.503710 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-847bcdcbb8-ph9ks"]
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-scripts\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505100 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6361378-b3ff-41c4-a77e-3bb4a1482984-logs\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505142 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfwj\" (UniqueName: \"kubernetes.io/projected/a6361378-b3ff-41c4-a77e-3bb4a1482984-kube-api-access-ccfwj\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505226 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-config-data\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-tls-certs\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505367 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-847bcdcbb8-ph9ks"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-secret-key\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.505551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-combined-ca-bundle\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.511995 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-847bcdcbb8-ph9ks"]
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.607932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6361378-b3ff-41c4-a77e-3bb4a1482984-logs\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.607979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfwj\" (UniqueName: \"kubernetes.io/projected/a6361378-b3ff-41c4-a77e-3bb4a1482984-kube-api-access-ccfwj\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.608010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8dk\" (UniqueName: \"kubernetes.io/projected/a75235c9-c000-495b-92d7-797733f10601-kube-api-access-lq8dk\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.608071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-config-data\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.608150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a75235c9-c000-495b-92d7-797733f10601-logs\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.608327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6361378-b3ff-41c4-a77e-3bb4a1482984-logs\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn"
Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.609171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName:
\"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-config-data\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.609213 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-horizon-tls-certs\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.609272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-tls-certs\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-secret-key\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-combined-ca-bundle\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-combined-ca-bundle\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75235c9-c000-495b-92d7-797733f10601-scripts\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75235c9-c000-495b-92d7-797733f10601-config-data\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-horizon-secret-key\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.610601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-scripts\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.611113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-scripts\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.616773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-tls-certs\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.616793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-secret-key\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.629010 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfwj\" (UniqueName: \"kubernetes.io/projected/a6361378-b3ff-41c4-a77e-3bb4a1482984-kube-api-access-ccfwj\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.634611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-combined-ca-bundle\") pod \"horizon-587db8c9db-9blcn\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.680854 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.724278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-combined-ca-bundle\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.724339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75235c9-c000-495b-92d7-797733f10601-scripts\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.724384 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75235c9-c000-495b-92d7-797733f10601-config-data\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.725262 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a75235c9-c000-495b-92d7-797733f10601-scripts\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.725761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a75235c9-c000-495b-92d7-797733f10601-config-data\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.724426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-horizon-secret-key\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.725862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq8dk\" (UniqueName: \"kubernetes.io/projected/a75235c9-c000-495b-92d7-797733f10601-kube-api-access-lq8dk\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.726724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a75235c9-c000-495b-92d7-797733f10601-logs\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.726764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-horizon-tls-certs\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.727170 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a75235c9-c000-495b-92d7-797733f10601-logs\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.729878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-horizon-secret-key\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.731435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-combined-ca-bundle\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.734568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a75235c9-c000-495b-92d7-797733f10601-horizon-tls-certs\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.745654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq8dk\" (UniqueName: \"kubernetes.io/projected/a75235c9-c000-495b-92d7-797733f10601-kube-api-access-lq8dk\") pod \"horizon-847bcdcbb8-ph9ks\" (UID: \"a75235c9-c000-495b-92d7-797733f10601\") " pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:36 crc kubenswrapper[4832]: I1204 06:28:36.860862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:28:38 crc kubenswrapper[4832]: I1204 06:28:38.169274 4832 generic.go:334] "Generic (PLEG): container finished" podID="380ef86a-fae8-4946-857a-8fc69f555304" containerID="2440bee35940df428a9a1a4587d644009856959b3f8cfa8fb06ad99ad10b7e33" exitCode=0 Dec 04 06:28:38 crc kubenswrapper[4832]: I1204 06:28:38.169365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l28b8" event={"ID":"380ef86a-fae8-4946-857a-8fc69f555304","Type":"ContainerDied","Data":"2440bee35940df428a9a1a4587d644009856959b3f8cfa8fb06ad99ad10b7e33"} Dec 04 06:28:38 crc kubenswrapper[4832]: I1204 06:28:38.436571 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" Dec 04 06:28:38 crc kubenswrapper[4832]: I1204 06:28:38.511106 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqc8n"] Dec 04 06:28:38 crc kubenswrapper[4832]: I1204 06:28:38.511369 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-jqc8n" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" containerID="cri-o://20420f08dfa209d75bae24fcf9408dc638ca0c410550ae1c14cfc26ae3158132" gracePeriod=10 Dec 04 06:28:39 crc kubenswrapper[4832]: I1204 06:28:39.181879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqc8n" event={"ID":"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08","Type":"ContainerDied","Data":"20420f08dfa209d75bae24fcf9408dc638ca0c410550ae1c14cfc26ae3158132"} Dec 04 06:28:39 crc kubenswrapper[4832]: I1204 06:28:39.181835 4832 generic.go:334] "Generic (PLEG): container finished" podID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerID="20420f08dfa209d75bae24fcf9408dc638ca0c410550ae1c14cfc26ae3158132" exitCode=0 Dec 04 06:28:42 crc kubenswrapper[4832]: I1204 06:28:42.389621 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jqc8n" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Dec 04 06:28:44 crc kubenswrapper[4832]: I1204 06:28:44.237048 4832 generic.go:334] "Generic (PLEG): container finished" podID="8f4298e5-b22d-4f71-b682-87539fc2bae7" containerID="1ec08c6c9c2e2e5c28d7a348235144809cb25febbfc0c813e4b9901c2fae084c" exitCode=0 Dec 04 06:28:44 crc kubenswrapper[4832]: I1204 06:28:44.237104 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lckgk" event={"ID":"8f4298e5-b22d-4f71-b682-87539fc2bae7","Type":"ContainerDied","Data":"1ec08c6c9c2e2e5c28d7a348235144809cb25febbfc0c813e4b9901c2fae084c"} Dec 04 06:28:46 crc kubenswrapper[4832]: E1204 06:28:46.977189 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 04 06:28:46 crc kubenswrapper[4832]: E1204 06:28:46.977853 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd6h7fh5fch5dfh677hf9hd7h587hdchbch89h567h5b4h75hf6h657h569h9fh5b6h566h564h564h554h5c9h694h9h667h65bh5f9h7bh565h97q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv55f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d49f8894f-6hsxv_openstack(02b2da44-0766-4710-9351-b550f260667e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:28:46 crc kubenswrapper[4832]: E1204 06:28:46.981133 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 04 06:28:46 crc kubenswrapper[4832]: E1204 06:28:46.981247 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbh5b9h6bh547h558h576h646h59fh58h8ch8fh574hc5h54h695h67ch5c6h7bhbfh87h67bh677h584h666h68dh5c8h549h5ddh5ch56h66h89q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47zv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6479b8b47c-x6wkf_openstack(a6b41f85-afb6-4287-881f-3f98e135d7bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:28:47 crc kubenswrapper[4832]: E1204 06:28:47.008953 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6479b8b47c-x6wkf" podUID="a6b41f85-afb6-4287-881f-3f98e135d7bb" Dec 04 06:28:47 crc kubenswrapper[4832]: E1204 06:28:47.009132 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5d49f8894f-6hsxv" podUID="02b2da44-0766-4710-9351-b550f260667e" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.092877 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.202863 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-combined-ca-bundle\") pod \"380ef86a-fae8-4946-857a-8fc69f555304\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.203055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-credential-keys\") pod \"380ef86a-fae8-4946-857a-8fc69f555304\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.203092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-config-data\") pod \"380ef86a-fae8-4946-857a-8fc69f555304\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.203118 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-scripts\") pod \"380ef86a-fae8-4946-857a-8fc69f555304\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.203183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-fernet-keys\") pod \"380ef86a-fae8-4946-857a-8fc69f555304\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.203248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkqc9\" (UniqueName: \"kubernetes.io/projected/380ef86a-fae8-4946-857a-8fc69f555304-kube-api-access-vkqc9\") pod \"380ef86a-fae8-4946-857a-8fc69f555304\" (UID: \"380ef86a-fae8-4946-857a-8fc69f555304\") " Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.214016 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "380ef86a-fae8-4946-857a-8fc69f555304" (UID: "380ef86a-fae8-4946-857a-8fc69f555304"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.213908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "380ef86a-fae8-4946-857a-8fc69f555304" (UID: "380ef86a-fae8-4946-857a-8fc69f555304"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.214136 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-scripts" (OuterVolumeSpecName: "scripts") pod "380ef86a-fae8-4946-857a-8fc69f555304" (UID: "380ef86a-fae8-4946-857a-8fc69f555304"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.220275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380ef86a-fae8-4946-857a-8fc69f555304-kube-api-access-vkqc9" (OuterVolumeSpecName: "kube-api-access-vkqc9") pod "380ef86a-fae8-4946-857a-8fc69f555304" (UID: "380ef86a-fae8-4946-857a-8fc69f555304"). InnerVolumeSpecName "kube-api-access-vkqc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.241948 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "380ef86a-fae8-4946-857a-8fc69f555304" (UID: "380ef86a-fae8-4946-857a-8fc69f555304"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.249551 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-config-data" (OuterVolumeSpecName: "config-data") pod "380ef86a-fae8-4946-857a-8fc69f555304" (UID: "380ef86a-fae8-4946-857a-8fc69f555304"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.268901 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l28b8" event={"ID":"380ef86a-fae8-4946-857a-8fc69f555304","Type":"ContainerDied","Data":"a1f7d27112ad8ab5bb946b899881da94ba7bef36c8651e257607bc7107d9ba9a"} Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.269017 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f7d27112ad8ab5bb946b899881da94ba7bef36c8651e257607bc7107d9ba9a" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.269022 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l28b8" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.306361 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.306498 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.306516 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.306532 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.306545 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkqc9\" (UniqueName: \"kubernetes.io/projected/380ef86a-fae8-4946-857a-8fc69f555304-kube-api-access-vkqc9\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:47 crc kubenswrapper[4832]: I1204 06:28:47.306558 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380ef86a-fae8-4946-857a-8fc69f555304-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:47 crc kubenswrapper[4832]: E1204 06:28:47.771455 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 04 06:28:47 crc kubenswrapper[4832]: E1204 06:28:47.771971 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqs6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jggjz_openstack(bac8c79c-e51d-4e52-a5d1-1f8472db13b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:28:47 crc kubenswrapper[4832]: E1204 06:28:47.773190 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jggjz" podUID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.266228 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l28b8"] Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.275256 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l28b8"] Dec 04 06:28:48 crc kubenswrapper[4832]: E1204 06:28:48.281997 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-jggjz" podUID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.379985 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nm7x4"] Dec 04 06:28:48 crc kubenswrapper[4832]: E1204 06:28:48.380498 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380ef86a-fae8-4946-857a-8fc69f555304" containerName="keystone-bootstrap" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.380512 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="380ef86a-fae8-4946-857a-8fc69f555304" containerName="keystone-bootstrap" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.380686 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="380ef86a-fae8-4946-857a-8fc69f555304" containerName="keystone-bootstrap" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 
06:28:48.381433 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.383697 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.383944 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.383972 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.384093 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4k75v" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.384250 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.388800 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nm7x4"] Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.436693 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-credential-keys\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.436763 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-config-data\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.436835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-fernet-keys\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.436865 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-scripts\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.436979 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-combined-ca-bundle\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.437052 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cssb6\" (UniqueName: \"kubernetes.io/projected/28ae9519-5721-4fbb-87b1-3b215638adaf-kube-api-access-cssb6\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.538996 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-credential-keys\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.539071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-config-data\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.539110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-fernet-keys\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.539139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-scripts\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.539158 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-combined-ca-bundle\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.539182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cssb6\" (UniqueName: \"kubernetes.io/projected/28ae9519-5721-4fbb-87b1-3b215638adaf-kube-api-access-cssb6\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.545102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-fernet-keys\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.546114 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-credential-keys\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.550102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-scripts\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.550743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-config-data\") pod 
\"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.554052 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-combined-ca-bundle\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.557209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cssb6\" (UniqueName: \"kubernetes.io/projected/28ae9519-5721-4fbb-87b1-3b215638adaf-kube-api-access-cssb6\") pod \"keystone-bootstrap-nm7x4\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") " pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.712946 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:28:48 crc kubenswrapper[4832]: I1204 06:28:48.723711 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380ef86a-fae8-4946-857a-8fc69f555304" path="/var/lib/kubelet/pods/380ef86a-fae8-4946-857a-8fc69f555304/volumes" Dec 04 06:28:52 crc kubenswrapper[4832]: I1204 06:28:52.390184 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jqc8n" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 04 06:28:56 crc kubenswrapper[4832]: E1204 06:28:56.055199 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 04 06:28:56 crc kubenswrapper[4832]: E1204 06:28:56.055823 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc7h545h9chffh75h658h5ddh84h659h5dch5d5h566h66h75h648h5d6h67bh7hcfhfchc7h7bh74h97h564h54ch686hcbh5c8h5c4h5dch66bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v77j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fe54a9ec-6e1c-4745-95df-4c56a07ce2f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.190253 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6479b8b47c-x6wkf" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.205424 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lckgk" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.205627 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.222801 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325518 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47zv2\" (UniqueName: \"kubernetes.io/projected/a6b41f85-afb6-4287-881f-3f98e135d7bb-kube-api-access-47zv2\") pod \"a6b41f85-afb6-4287-881f-3f98e135d7bb\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325569 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-scripts\") pod \"02b2da44-0766-4710-9351-b550f260667e\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b2da44-0766-4710-9351-b550f260667e-logs\") pod \"02b2da44-0766-4710-9351-b550f260667e\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-config\") pod \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325767 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6b41f85-afb6-4287-881f-3f98e135d7bb-horizon-secret-key\") pod \"a6b41f85-afb6-4287-881f-3f98e135d7bb\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-combined-ca-bundle\") pod \"8f4298e5-b22d-4f71-b682-87539fc2bae7\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b41f85-afb6-4287-881f-3f98e135d7bb-logs\") pod \"a6b41f85-afb6-4287-881f-3f98e135d7bb\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325850 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-db-sync-config-data\") pod \"8f4298e5-b22d-4f71-b682-87539fc2bae7\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325872 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv55f\" (UniqueName: \"kubernetes.io/projected/02b2da44-0766-4710-9351-b550f260667e-kube-api-access-rv55f\") pod \"02b2da44-0766-4710-9351-b550f260667e\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325921 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-nb\") pod \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\" (UID: 
\"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325949 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsscn\" (UniqueName: \"kubernetes.io/projected/8f4298e5-b22d-4f71-b682-87539fc2bae7-kube-api-access-dsscn\") pod \"8f4298e5-b22d-4f71-b682-87539fc2bae7\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.325981 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02b2da44-0766-4710-9351-b550f260667e-horizon-secret-key\") pod \"02b2da44-0766-4710-9351-b550f260667e\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-config-data\") pod \"8f4298e5-b22d-4f71-b682-87539fc2bae7\" (UID: \"8f4298e5-b22d-4f71-b682-87539fc2bae7\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-config-data\") pod \"02b2da44-0766-4710-9351-b550f260667e\" (UID: \"02b2da44-0766-4710-9351-b550f260667e\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-config-data\") pod \"a6b41f85-afb6-4287-881f-3f98e135d7bb\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326082 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-dns-svc\") pod \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326144 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-sb\") pod \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5hhl\" (UniqueName: \"kubernetes.io/projected/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-kube-api-access-t5hhl\") pod \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\" (UID: \"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-scripts\") pod \"a6b41f85-afb6-4287-881f-3f98e135d7bb\" (UID: \"a6b41f85-afb6-4287-881f-3f98e135d7bb\") " Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326341 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b41f85-afb6-4287-881f-3f98e135d7bb-logs" (OuterVolumeSpecName: "logs") pod "a6b41f85-afb6-4287-881f-3f98e135d7bb" (UID: "a6b41f85-afb6-4287-881f-3f98e135d7bb"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.326977 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b41f85-afb6-4287-881f-3f98e135d7bb-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.327142 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-scripts" (OuterVolumeSpecName: "scripts") pod "a6b41f85-afb6-4287-881f-3f98e135d7bb" (UID: "a6b41f85-afb6-4287-881f-3f98e135d7bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.328245 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b2da44-0766-4710-9351-b550f260667e-logs" (OuterVolumeSpecName: "logs") pod "02b2da44-0766-4710-9351-b550f260667e" (UID: "02b2da44-0766-4710-9351-b550f260667e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.328308 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-scripts" (OuterVolumeSpecName: "scripts") pod "02b2da44-0766-4710-9351-b550f260667e" (UID: "02b2da44-0766-4710-9351-b550f260667e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.329726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-config-data" (OuterVolumeSpecName: "config-data") pod "02b2da44-0766-4710-9351-b550f260667e" (UID: "02b2da44-0766-4710-9351-b550f260667e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.330632 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b41f85-afb6-4287-881f-3f98e135d7bb-kube-api-access-47zv2" (OuterVolumeSpecName: "kube-api-access-47zv2") pod "a6b41f85-afb6-4287-881f-3f98e135d7bb" (UID: "a6b41f85-afb6-4287-881f-3f98e135d7bb"). InnerVolumeSpecName "kube-api-access-47zv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.331315 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-config-data" (OuterVolumeSpecName: "config-data") pod "a6b41f85-afb6-4287-881f-3f98e135d7bb" (UID: "a6b41f85-afb6-4287-881f-3f98e135d7bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.332776 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b41f85-afb6-4287-881f-3f98e135d7bb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a6b41f85-afb6-4287-881f-3f98e135d7bb" (UID: "a6b41f85-afb6-4287-881f-3f98e135d7bb"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.335181 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4298e5-b22d-4f71-b682-87539fc2bae7-kube-api-access-dsscn" (OuterVolumeSpecName: "kube-api-access-dsscn") pod "8f4298e5-b22d-4f71-b682-87539fc2bae7" (UID: "8f4298e5-b22d-4f71-b682-87539fc2bae7"). InnerVolumeSpecName "kube-api-access-dsscn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.335669 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-kube-api-access-t5hhl" (OuterVolumeSpecName: "kube-api-access-t5hhl") pod "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" (UID: "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08"). InnerVolumeSpecName "kube-api-access-t5hhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.339083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b2da44-0766-4710-9351-b550f260667e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "02b2da44-0766-4710-9351-b550f260667e" (UID: "02b2da44-0766-4710-9351-b550f260667e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.340445 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8f4298e5-b22d-4f71-b682-87539fc2bae7" (UID: "8f4298e5-b22d-4f71-b682-87539fc2bae7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.345449 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b2da44-0766-4710-9351-b550f260667e-kube-api-access-rv55f" (OuterVolumeSpecName: "kube-api-access-rv55f") pod "02b2da44-0766-4710-9351-b550f260667e" (UID: "02b2da44-0766-4710-9351-b550f260667e"). InnerVolumeSpecName "kube-api-access-rv55f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.352252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6479b8b47c-x6wkf" event={"ID":"a6b41f85-afb6-4287-881f-3f98e135d7bb","Type":"ContainerDied","Data":"5b31b6708605011d0e7443b5ad4c35436668eed8a7a8d606f8cfb9d5b130575a"} Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.352291 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6479b8b47c-x6wkf" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.353534 4832 generic.go:334] "Generic (PLEG): container finished" podID="1a55ba05-c1ce-48f6-b8af-b3b1497554e2" containerID="0cdda13a5f5f2372712c104a5b736ec90f5eb96f85cd80b0f0669f181c25d3bd" exitCode=0 Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.353589 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tm9nr" event={"ID":"1a55ba05-c1ce-48f6-b8af-b3b1497554e2","Type":"ContainerDied","Data":"0cdda13a5f5f2372712c104a5b736ec90f5eb96f85cd80b0f0669f181c25d3bd"} Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.357016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jqc8n" event={"ID":"3cfd2b4e-14fc-406e-87e7-b7bcee62ea08","Type":"ContainerDied","Data":"b909a16564ac6ba9e8ae273bacca9763dfdc108e4152c971ff982fb1ac98a5ff"} Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.357079 4832 scope.go:117] "RemoveContainer" containerID="20420f08dfa209d75bae24fcf9408dc638ca0c410550ae1c14cfc26ae3158132" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.357248 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jqc8n" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.359447 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d49f8894f-6hsxv" event={"ID":"02b2da44-0766-4710-9351-b550f260667e","Type":"ContainerDied","Data":"46142587a6b95cf9c3f1a84ebeefe827014e63eea1279747b71d7e4077aaa8e0"} Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.359548 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d49f8894f-6hsxv" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.363108 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f4298e5-b22d-4f71-b682-87539fc2bae7" (UID: "8f4298e5-b22d-4f71-b682-87539fc2bae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.363230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-lckgk" event={"ID":"8f4298e5-b22d-4f71-b682-87539fc2bae7","Type":"ContainerDied","Data":"e9eacaeef7b6abf53d798224a96e7f69b5221ae7da349079d898b76fde17caac"} Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.363251 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9eacaeef7b6abf53d798224a96e7f69b5221ae7da349079d898b76fde17caac" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.363293 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-lckgk" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.392433 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" (UID: "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.392581 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-config" (OuterVolumeSpecName: "config") pod "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" (UID: "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.392734 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" (UID: "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.394591 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-config-data" (OuterVolumeSpecName: "config-data") pod "8f4298e5-b22d-4f71-b682-87539fc2bae7" (UID: "8f4298e5-b22d-4f71-b682-87539fc2bae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.406801 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" (UID: "3cfd2b4e-14fc-406e-87e7-b7bcee62ea08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429154 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47zv2\" (UniqueName: \"kubernetes.io/projected/a6b41f85-afb6-4287-881f-3f98e135d7bb-kube-api-access-47zv2\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429183 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429194 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b2da44-0766-4710-9351-b550f260667e-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429203 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429215 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6b41f85-afb6-4287-881f-3f98e135d7bb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429223 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429232 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429240 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv55f\" (UniqueName: \"kubernetes.io/projected/02b2da44-0766-4710-9351-b550f260667e-kube-api-access-rv55f\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429248 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429257 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsscn\" (UniqueName: \"kubernetes.io/projected/8f4298e5-b22d-4f71-b682-87539fc2bae7-kube-api-access-dsscn\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429264 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02b2da44-0766-4710-9351-b550f260667e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429272 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4298e5-b22d-4f71-b682-87539fc2bae7-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429281 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02b2da44-0766-4710-9351-b550f260667e-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429289 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429300 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429308 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429316 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5hhl\" (UniqueName: \"kubernetes.io/projected/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08-kube-api-access-t5hhl\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.429326 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6b41f85-afb6-4287-881f-3f98e135d7bb-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.533470 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d49f8894f-6hsxv"] Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.541797 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d49f8894f-6hsxv"] Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.558476 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6479b8b47c-x6wkf"] Dec 04 
06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.565753 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6479b8b47c-x6wkf"] Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.698513 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqc8n"] Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.707955 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jqc8n"] Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.726167 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b2da44-0766-4710-9351-b550f260667e" path="/var/lib/kubelet/pods/02b2da44-0766-4710-9351-b550f260667e/volumes" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.726634 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" path="/var/lib/kubelet/pods/3cfd2b4e-14fc-406e-87e7-b7bcee62ea08/volumes" Dec 04 06:28:56 crc kubenswrapper[4832]: I1204 06:28:56.727554 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b41f85-afb6-4287-881f-3f98e135d7bb" path="/var/lib/kubelet/pods/a6b41f85-afb6-4287-881f-3f98e135d7bb/volumes" Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.318792 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.319012 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhd7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-mxwh7_openstack(0f50b7d2-4e8d-4905-85ec-811cdd3c60d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.320261 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-mxwh7" podUID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.323174 4832 scope.go:117] "RemoveContainer" containerID="30d4b491d2bef5aa9d4e743bedad058c3203e3e0186bbb68b930845e886e4571" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.391549 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jqc8n" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.418032 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" 
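
The two E1204 records above are one failure surfacing twice: the CRI pull RPC was cancelled mid-copy (ErrImagePull), and on the next sync the kubelet declines to retry immediately and parks the container in ImagePullBackOff, retrying with an exponentially growing delay (by default roughly a 10s base doubling to a 5m ceiling). An illustrative sketch of that retry shape, not the kubelet's actual implementation; base, ceiling, and attempt cap are assumed parameters:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries pull, doubling the delay after each failure up
// to a ceiling: the ErrImagePull -> ImagePullBackOff cycle seen in the
// log, compressed into one loop.
func pullWithBackoff(pull func() error, base, ceiling time.Duration, attempts int) error {
	delay := base
	for i := 1; i <= attempts; i++ {
		err := pull()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d: ErrImagePull: %v; backing off %s\n", i, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > ceiling {
			delay = ceiling
		}
	}
	return errors.New("retries exhausted")
}

func main() {
	n := 0
	err := pullWithBackoff(func() error {
		if n++; n < 3 { // fail twice, then succeed
			return errors.New("rpc error: code = Canceled desc = copying config: context canceled")
		}
		return nil
	}, 10*time.Second, 5*time.Minute, 5)
	fmt.Println("final:", err) // final: <nil>
}

With a 10s base the demo sleeps 30 seconds before succeeding; shrink the base when experimenting.
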
pod="openstack/cinder-db-sync-mxwh7" podUID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.790761 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vf7sm"] Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.791594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="init" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.791610 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="init" Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.791629 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.791636 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" Dec 04 06:28:57 crc kubenswrapper[4832]: E1204 06:28:57.791669 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4298e5-b22d-4f71-b682-87539fc2bae7" containerName="glance-db-sync" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.791676 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4298e5-b22d-4f71-b682-87539fc2bae7" containerName="glance-db-sync" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.791869 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4298e5-b22d-4f71-b682-87539fc2bae7" containerName="glance-db-sync" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.791885 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfd2b4e-14fc-406e-87e7-b7bcee62ea08" containerName="dnsmasq-dns" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.792831 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.943971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-config\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.944058 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.944216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.944252 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.944312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wjv\" (UniqueName: \"kubernetes.io/projected/697880da-0a7d-46a5-9753-7c42d146f4d4-kube-api-access-r8wjv\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.944360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:57 crc kubenswrapper[4832]: I1204 06:28:57.945595 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vf7sm"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.051129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-config\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.051185 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.051232 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.051252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.051277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wjv\" (UniqueName: \"kubernetes.io/projected/697880da-0a7d-46a5-9753-7c42d146f4d4-kube-api-access-r8wjv\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.051298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.052057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.052606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-config\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.053125 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.053676 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.054186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.104005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wjv\" (UniqueName: 
\"kubernetes.io/projected/697880da-0a7d-46a5-9753-7c42d146f4d4-kube-api-access-r8wjv\") pod \"dnsmasq-dns-785d8bcb8c-vf7sm\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.145929 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.205263 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tm9nr" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.237424 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-587db8c9db-9blcn"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.301828 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nm7x4"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.319976 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-847bcdcbb8-ph9ks"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.360708 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-combined-ca-bundle\") pod \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.361140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-config\") pod \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.361392 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdfk\" (UniqueName: \"kubernetes.io/projected/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-kube-api-access-xgdfk\") pod \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\" (UID: \"1a55ba05-c1ce-48f6-b8af-b3b1497554e2\") " Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.371678 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-kube-api-access-xgdfk" (OuterVolumeSpecName: "kube-api-access-xgdfk") pod "1a55ba05-c1ce-48f6-b8af-b3b1497554e2" (UID: "1a55ba05-c1ce-48f6-b8af-b3b1497554e2"). InnerVolumeSpecName "kube-api-access-xgdfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.448474 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-config" (OuterVolumeSpecName: "config") pod "1a55ba05-c1ce-48f6-b8af-b3b1497554e2" (UID: "1a55ba05-c1ce-48f6-b8af-b3b1497554e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.452964 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-847bcdcbb8-ph9ks" event={"ID":"a75235c9-c000-495b-92d7-797733f10601","Type":"ContainerStarted","Data":"13904e7f88ed961016118c48aae55846eefa656299696f242febb8207a058821"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.453362 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a55ba05-c1ce-48f6-b8af-b3b1497554e2" (UID: "1a55ba05-c1ce-48f6-b8af-b3b1497554e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.459318 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tm9nr" event={"ID":"1a55ba05-c1ce-48f6-b8af-b3b1497554e2","Type":"ContainerDied","Data":"2412a764c5052f0f9429a052f8921372c0a8d35a8c36d55ebd6ed3a8a3c3ce25"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.459367 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2412a764c5052f0f9429a052f8921372c0a8d35a8c36d55ebd6ed3a8a3c3ce25" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.459449 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tm9nr" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.465153 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdfk\" (UniqueName: \"kubernetes.io/projected/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-kube-api-access-xgdfk\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.465196 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.465211 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a55ba05-c1ce-48f6-b8af-b3b1497554e2-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.482844 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nm7x4" event={"ID":"28ae9519-5721-4fbb-87b1-3b215638adaf","Type":"ContainerStarted","Data":"099f7006b860bd5f2390f0821acddc52f925b88fe6cef08aa31b3b0c1f06ac28"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.520912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-znj8j" event={"ID":"e43b67ac-4870-4632-a6a2-84db802b371a","Type":"ContainerStarted","Data":"912ef879e31ce80a1aba6c344f0f1cd2d3219d5f4850babbe664cc754b984bfa"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.536444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4c9bd8c5-dspfj" event={"ID":"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8","Type":"ContainerStarted","Data":"627119cc4d430756dd1761c5990b5a4d00fa85542b31eaf6ede412bc3955cd51"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.536509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4c9bd8c5-dspfj" 
event={"ID":"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8","Type":"ContainerStarted","Data":"6aa66e15069995f67c76bbfb6d21b690ba298f03c0cb13174464ecce3141cfca"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.536688 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b4c9bd8c5-dspfj" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon-log" containerID="cri-o://6aa66e15069995f67c76bbfb6d21b690ba298f03c0cb13174464ecce3141cfca" gracePeriod=30 Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.542269 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b4c9bd8c5-dspfj" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon" containerID="cri-o://627119cc4d430756dd1761c5990b5a4d00fa85542b31eaf6ede412bc3955cd51" gracePeriod=30 Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.556212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587db8c9db-9blcn" event={"ID":"a6361378-b3ff-41c4-a77e-3bb4a1482984","Type":"ContainerStarted","Data":"0c4236bf8642901bb753b205a81ffffdc00c5382448c49d79904a8433a58a45d"} Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.570565 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-znj8j" podStartSLOduration=6.426655545 podStartE2EDuration="31.570533858s" podCreationTimestamp="2025-12-04 06:28:27 +0000 UTC" firstStartedPulling="2025-12-04 06:28:30.908780314 +0000 UTC m=+1166.521598020" lastFinishedPulling="2025-12-04 06:28:56.052658627 +0000 UTC m=+1191.665476333" observedRunningTime="2025-12-04 06:28:58.547439106 +0000 UTC m=+1194.160256832" watchObservedRunningTime="2025-12-04 06:28:58.570533858 +0000 UTC m=+1194.183351584" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.586983 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b4c9bd8c5-dspfj" podStartSLOduration=4.135970118 podStartE2EDuration="28.586959796s" podCreationTimestamp="2025-12-04 06:28:30 +0000 UTC" firstStartedPulling="2025-12-04 06:28:31.623909131 +0000 UTC m=+1167.236726837" lastFinishedPulling="2025-12-04 06:28:56.074898819 +0000 UTC m=+1191.687716515" observedRunningTime="2025-12-04 06:28:58.5713807 +0000 UTC m=+1194.184198406" watchObservedRunningTime="2025-12-04 06:28:58.586959796 +0000 UTC m=+1194.199777502" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.630130 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vf7sm"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.661469 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8z88f"] Dec 04 06:28:58 crc kubenswrapper[4832]: E1204 06:28:58.661985 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a55ba05-c1ce-48f6-b8af-b3b1497554e2" containerName="neutron-db-sync" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.662004 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a55ba05-c1ce-48f6-b8af-b3b1497554e2" containerName="neutron-db-sync" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.662249 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a55ba05-c1ce-48f6-b8af-b3b1497554e2" containerName="neutron-db-sync" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.663326 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.667021 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8z88f"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.771545 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.771594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.771674 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7mq\" (UniqueName: \"kubernetes.io/projected/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-kube-api-access-tx7mq\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.772030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-config\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.772257 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.772547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-svc\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.846855 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c689555f6-jht44"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.848740 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.851338 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.851724 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.855723 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.855911 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jfcwq" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.861459 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vf7sm"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.874114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.874210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-svc\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.874255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.874276 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.874369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7mq\" (UniqueName: \"kubernetes.io/projected/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-kube-api-access-tx7mq\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.874425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-config\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.877363 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: 
\"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.877937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.878096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-config\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.880917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.882417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-svc\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.882479 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c689555f6-jht44"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.909024 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.910312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7mq\" (UniqueName: \"kubernetes.io/projected/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-kube-api-access-tx7mq\") pod \"dnsmasq-dns-55f844cf75-8z88f\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.911051 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.914007 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.914757 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.919006 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5mhp8" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.920993 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.977234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-combined-ca-bundle\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.977637 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zth6v\" (UniqueName: \"kubernetes.io/projected/2d4a9484-df22-4c06-bd80-b71c9b785d40-kube-api-access-zth6v\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.977761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.977812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.977881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-scripts\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.977914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqgn\" (UniqueName: \"kubernetes.io/projected/95a663f4-e6ab-4854-b1f0-3e98c6c39515-kube-api-access-5zqgn\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.978095 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-httpd-config\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " 
pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.978216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-config-data\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.978506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-config\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.978565 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-logs\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.978595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-ovndb-tls-certs\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:58 crc kubenswrapper[4832]: I1204 06:28:58.979209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.011856 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081761 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-config-data\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081848 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-config\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-logs\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-ovndb-tls-certs\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-combined-ca-bundle\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.081993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zth6v\" (UniqueName: \"kubernetes.io/projected/2d4a9484-df22-4c06-bd80-b71c9b785d40-kube-api-access-zth6v\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.082012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.082033 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.082069 
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.082069 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-scripts\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.082097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqgn\" (UniqueName: \"kubernetes.io/projected/95a663f4-e6ab-4854-b1f0-3e98c6c39515-kube-api-access-5zqgn\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.082149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-httpd-config\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.086627 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.086675 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-logs\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.087029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.088105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-httpd-config\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.094365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-scripts\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.096611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.106251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-config-data\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.108092 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-config\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.108606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-combined-ca-bundle\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.110881 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.112082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-ovndb-tls-certs\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.112960 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.121915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqgn\" (UniqueName: \"kubernetes.io/projected/95a663f4-e6ab-4854-b1f0-3e98c6c39515-kube-api-access-5zqgn\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.131718 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.134738 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.153427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zth6v\" (UniqueName: \"kubernetes.io/projected/2d4a9484-df22-4c06-bd80-b71c9b785d40-kube-api-access-zth6v\") pod \"neutron-7c689555f6-jht44\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " pod="openstack/neutron-7c689555f6-jht44"
Need to start a new one" pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.172698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprhd\" (UniqueName: \"kubernetes.io/projected/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-kube-api-access-lprhd\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186578 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186647 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.186799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.247159 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.289909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290416 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290587 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lprhd\" (UniqueName: \"kubernetes.io/projected/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-kube-api-access-lprhd\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.290815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.291394 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:28:59 crc 
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.291687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.301069 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.306980 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.322180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.324712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lprhd\" (UniqueName: \"kubernetes.io/projected/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-kube-api-access-lprhd\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.338592 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") " pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.471069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 04 06:28:59 crc kubenswrapper[4832]: W1204 06:28:59.477862 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod697880da_0a7d_46a5_9753_7c42d146f4d4.slice/crio-6214721fcf836ba199cc17ae94ab9b07a0b86072fdc8d2504b4f6ca315175274 WatchSource:0}: Error finding container 6214721fcf836ba199cc17ae94ab9b07a0b86072fdc8d2504b4f6ca315175274: Status 404 returned error can't find the container with id 6214721fcf836ba199cc17ae94ab9b07a0b86072fdc8d2504b4f6ca315175274
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.566697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587db8c9db-9blcn" event={"ID":"a6361378-b3ff-41c4-a77e-3bb4a1482984","Type":"ContainerStarted","Data":"e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6"}
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.567886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" event={"ID":"697880da-0a7d-46a5-9753-7c42d146f4d4","Type":"ContainerStarted","Data":"6214721fcf836ba199cc17ae94ab9b07a0b86072fdc8d2504b4f6ca315175274"}
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.569510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-847bcdcbb8-ph9ks" event={"ID":"a75235c9-c000-495b-92d7-797733f10601","Type":"ContainerStarted","Data":"5fb41cfe32d1257b2f9dc881ae58df046543c2310159297132999f318daea37c"}
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.577342 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nm7x4" event={"ID":"28ae9519-5721-4fbb-87b1-3b215638adaf","Type":"ContainerStarted","Data":"0a8620eccdd41a2c81ab4d36e60b401e7aae2336bf55e67c8b07672226d3c4f8"}
Dec 04 06:28:59 crc kubenswrapper[4832]: I1204 06:28:59.608237 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nm7x4" podStartSLOduration=11.608201386 podStartE2EDuration="11.608201386s" podCreationTimestamp="2025-12-04 06:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:28:59.602847083 +0000 UTC m=+1195.215664799" watchObservedRunningTime="2025-12-04 06:28:59.608201386 +0000 UTC m=+1195.221019092"
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.367790 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.547136 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8z88f"]
Dec 04 06:29:00 crc kubenswrapper[4832]: W1204 06:29:00.557573 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7e622fe_ec6f_4ae8_bac9_0f3a5109c034.slice/crio-cb4e87cb7328e56daa8df8c4ec84cd458c9e4704a1c85ad9b09dbc66b8e53f1f WatchSource:0}: Error finding container cb4e87cb7328e56daa8df8c4ec84cd458c9e4704a1c85ad9b09dbc66b8e53f1f: Status 404 returned error can't find the container with id cb4e87cb7328e56daa8df8c4ec84cd458c9e4704a1c85ad9b09dbc66b8e53f1f
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.561013 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c689555f6-jht44"]
Dec 04 06:29:00 crc kubenswrapper[4832]: W1204 06:29:00.617926 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d4a9484_df22_4c06_bd80_b71c9b785d40.slice/crio-0239f2435af0e62c4f37dfcf7d3450c06564ac18145684751a5c10fb245e09e1 WatchSource:0}: Error finding container 0239f2435af0e62c4f37dfcf7d3450c06564ac18145684751a5c10fb245e09e1: Status 404 returned error can't find the container with id 0239f2435af0e62c4f37dfcf7d3450c06564ac18145684751a5c10fb245e09e1
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.656020 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-847bcdcbb8-ph9ks" event={"ID":"a75235c9-c000-495b-92d7-797733f10601","Type":"ContainerStarted","Data":"a7a17bf7b493056322589f69617db630b8f97404c50c3f281ba17d6f15fdc906"}
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.668796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ea69524-a2dc-4d81-8eaa-9a1c8935513e","Type":"ContainerStarted","Data":"dd9d8ee9288819007b035d406c355b5ab88c9bd8c05387391f09ef5c7452fb61"}
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.673772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" event={"ID":"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034","Type":"ContainerStarted","Data":"cb4e87cb7328e56daa8df8c4ec84cd458c9e4704a1c85ad9b09dbc66b8e53f1f"}
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.685247 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.690293 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerStarted","Data":"61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765"}
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.693870 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-847bcdcbb8-ph9ks" podStartSLOduration=24.693850074 podStartE2EDuration="24.693850074s" podCreationTimestamp="2025-12-04 06:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:00.692347497 +0000 UTC m=+1196.305165203" watchObservedRunningTime="2025-12-04 06:29:00.693850074 +0000 UTC m=+1196.306667780"
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.695288 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587db8c9db-9blcn" event={"ID":"a6361378-b3ff-41c4-a77e-3bb4a1482984","Type":"ContainerStarted","Data":"d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339"}
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.700880 4832 generic.go:334] "Generic (PLEG): container finished" podID="697880da-0a7d-46a5-9753-7c42d146f4d4" containerID="7b32382c7d17c44931cfc497b89e393af6f2aac8f4a2d163ea3f77badb3167cf" exitCode=0
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.701930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" event={"ID":"697880da-0a7d-46a5-9753-7c42d146f4d4","Type":"ContainerDied","Data":"7b32382c7d17c44931cfc497b89e393af6f2aac8f4a2d163ea3f77badb3167cf"}
Dec 04 06:29:00 crc kubenswrapper[4832]: I1204 06:29:00.727825 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-587db8c9db-9blcn" podStartSLOduration=24.727798876 podStartE2EDuration="24.727798876s" podCreationTimestamp="2025-12-04 06:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:00.718603008 +0000 UTC m=+1196.331420714" watchObservedRunningTime="2025-12-04 06:29:00.727798876 +0000 UTC m=+1196.340616582"
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.054808 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b4c9bd8c5-dspfj"
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.235927 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm"
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.267706 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-sb\") pod \"697880da-0a7d-46a5-9753-7c42d146f4d4\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") "
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.267866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-nb\") pod \"697880da-0a7d-46a5-9753-7c42d146f4d4\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") "
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.267959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8wjv\" (UniqueName: \"kubernetes.io/projected/697880da-0a7d-46a5-9753-7c42d146f4d4-kube-api-access-r8wjv\") pod \"697880da-0a7d-46a5-9753-7c42d146f4d4\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") "
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.268121 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-config\") pod \"697880da-0a7d-46a5-9753-7c42d146f4d4\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") "
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.268190 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-swift-storage-0\") pod \"697880da-0a7d-46a5-9753-7c42d146f4d4\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") "
Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.268229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-svc\") pod \"697880da-0a7d-46a5-9753-7c42d146f4d4\" (UID: \"697880da-0a7d-46a5-9753-7c42d146f4d4\") "
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.354470 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697880da-0a7d-46a5-9753-7c42d146f4d4-kube-api-access-r8wjv" (OuterVolumeSpecName: "kube-api-access-r8wjv") pod "697880da-0a7d-46a5-9753-7c42d146f4d4" (UID: "697880da-0a7d-46a5-9753-7c42d146f4d4"). InnerVolumeSpecName "kube-api-access-r8wjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.372498 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.372532 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8wjv\" (UniqueName: \"kubernetes.io/projected/697880da-0a7d-46a5-9753-7c42d146f4d4-kube-api-access-r8wjv\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.378017 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-config" (OuterVolumeSpecName: "config") pod "697880da-0a7d-46a5-9753-7c42d146f4d4" (UID: "697880da-0a7d-46a5-9753-7c42d146f4d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.382457 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.389421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "697880da-0a7d-46a5-9753-7c42d146f4d4" (UID: "697880da-0a7d-46a5-9753-7c42d146f4d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.465574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "697880da-0a7d-46a5-9753-7c42d146f4d4" (UID: "697880da-0a7d-46a5-9753-7c42d146f4d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.467462 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "697880da-0a7d-46a5-9753-7c42d146f4d4" (UID: "697880da-0a7d-46a5-9753-7c42d146f4d4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.474079 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.474112 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.474123 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.474138 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697880da-0a7d-46a5-9753-7c42d146f4d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.483170 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.721946 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" event={"ID":"697880da-0a7d-46a5-9753-7c42d146f4d4","Type":"ContainerDied","Data":"6214721fcf836ba199cc17ae94ab9b07a0b86072fdc8d2504b4f6ca315175274"} Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.722012 4832 scope.go:117] "RemoveContainer" containerID="7b32382c7d17c44931cfc497b89e393af6f2aac8f4a2d163ea3f77badb3167cf" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.722145 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vf7sm" Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.764722 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c689555f6-jht44" event={"ID":"2d4a9484-df22-4c06-bd80-b71c9b785d40","Type":"ContainerStarted","Data":"ce8bff225dacd20b8d02aed97dc869d6b47eea08e3c8f940389f1372a9a8fec0"} Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.764778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c689555f6-jht44" event={"ID":"2d4a9484-df22-4c06-bd80-b71c9b785d40","Type":"ContainerStarted","Data":"0239f2435af0e62c4f37dfcf7d3450c06564ac18145684751a5c10fb245e09e1"} Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.768081 4832 generic.go:334] "Generic (PLEG): container finished" podID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerID="7a72f2c6fe967dc4d5861e9aeed27c73546af68f3f94b61f24925029bb594a48" exitCode=0 Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.768133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" event={"ID":"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034","Type":"ContainerDied","Data":"7a72f2c6fe967dc4d5861e9aeed27c73546af68f3f94b61f24925029bb594a48"} Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.774784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95a663f4-e6ab-4854-b1f0-3e98c6c39515","Type":"ContainerStarted","Data":"57d76c47b5b4bfd505932f8c766611e27f39f6a333d40d4c8d06ab75e10013cd"} Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.789673 4832 generic.go:334] "Generic (PLEG): container finished" podID="e43b67ac-4870-4632-a6a2-84db802b371a" containerID="912ef879e31ce80a1aba6c344f0f1cd2d3219d5f4850babbe664cc754b984bfa" exitCode=0 Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.790605 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-znj8j" event={"ID":"e43b67ac-4870-4632-a6a2-84db802b371a","Type":"ContainerDied","Data":"912ef879e31ce80a1aba6c344f0f1cd2d3219d5f4850babbe664cc754b984bfa"} Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.910133 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vf7sm"] Dec 04 06:29:01 crc kubenswrapper[4832]: I1204 06:29:01.916331 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vf7sm"] Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.728315 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697880da-0a7d-46a5-9753-7c42d146f4d4" path="/var/lib/kubelet/pods/697880da-0a7d-46a5-9753-7c42d146f4d4/volumes" Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.839135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ea69524-a2dc-4d81-8eaa-9a1c8935513e","Type":"ContainerStarted","Data":"7ccd2eeaeb5ab455b81dc377633d1a96db31dff154c2706514575abeeb08557c"} Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.843762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c689555f6-jht44" event={"ID":"2d4a9484-df22-4c06-bd80-b71c9b785d40","Type":"ContainerStarted","Data":"a200f72b817a2525eff1d166330bf820654a903e28cce83bc5e70d422431c528"} Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.845301 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:29:02 crc 
kubenswrapper[4832]: I1204 06:29:02.860126 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" event={"ID":"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034","Type":"ContainerStarted","Data":"eb6da92065a082aef9a6fb09337d66ec92a2829b4e32af5eff0246c73e5d73b1"} Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.860275 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.872774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95a663f4-e6ab-4854-b1f0-3e98c6c39515","Type":"ContainerStarted","Data":"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"} Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.884133 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c689555f6-jht44" podStartSLOduration=4.88409475 podStartE2EDuration="4.88409475s" podCreationTimestamp="2025-12-04 06:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:02.873371933 +0000 UTC m=+1198.486189629" watchObservedRunningTime="2025-12-04 06:29:02.88409475 +0000 UTC m=+1198.496912456" Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.894183 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jggjz" event={"ID":"bac8c79c-e51d-4e52-a5d1-1f8472db13b1","Type":"ContainerStarted","Data":"2fdd0a583a96900f4953bd83e84fb9119e7628d8b7b1e0d3d5ae3e2389d88eca"} Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.898694 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" podStartSLOduration=4.898659021 podStartE2EDuration="4.898659021s" podCreationTimestamp="2025-12-04 06:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:02.896930758 +0000 UTC m=+1198.509748474" watchObservedRunningTime="2025-12-04 06:29:02.898659021 +0000 UTC m=+1198.511476737" Dec 04 06:29:02 crc kubenswrapper[4832]: I1204 06:29:02.924754 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jggjz" podStartSLOduration=5.375432321 podStartE2EDuration="35.924725807s" podCreationTimestamp="2025-12-04 06:28:27 +0000 UTC" firstStartedPulling="2025-12-04 06:28:30.855840091 +0000 UTC m=+1166.468657797" lastFinishedPulling="2025-12-04 06:29:01.405133577 +0000 UTC m=+1197.017951283" observedRunningTime="2025-12-04 06:29:02.919919058 +0000 UTC m=+1198.532736784" watchObservedRunningTime="2025-12-04 06:29:02.924725807 +0000 UTC m=+1198.537543514" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.473881 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-znj8j" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.644068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-scripts\") pod \"e43b67ac-4870-4632-a6a2-84db802b371a\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.644448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-config-data\") pod \"e43b67ac-4870-4632-a6a2-84db802b371a\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.644496 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwjxr\" (UniqueName: \"kubernetes.io/projected/e43b67ac-4870-4632-a6a2-84db802b371a-kube-api-access-rwjxr\") pod \"e43b67ac-4870-4632-a6a2-84db802b371a\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.644671 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-combined-ca-bundle\") pod \"e43b67ac-4870-4632-a6a2-84db802b371a\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.644741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43b67ac-4870-4632-a6a2-84db802b371a-logs\") pod \"e43b67ac-4870-4632-a6a2-84db802b371a\" (UID: \"e43b67ac-4870-4632-a6a2-84db802b371a\") " Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.645622 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bdbc57ff5-2cpdh"] Dec 04 06:29:03 crc kubenswrapper[4832]: E1204 06:29:03.646048 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697880da-0a7d-46a5-9753-7c42d146f4d4" containerName="init" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.646066 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="697880da-0a7d-46a5-9753-7c42d146f4d4" containerName="init" Dec 04 06:29:03 crc kubenswrapper[4832]: E1204 06:29:03.646086 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43b67ac-4870-4632-a6a2-84db802b371a" containerName="placement-db-sync" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.646094 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43b67ac-4870-4632-a6a2-84db802b371a" containerName="placement-db-sync" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.646302 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="697880da-0a7d-46a5-9753-7c42d146f4d4" containerName="init" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.646324 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43b67ac-4870-4632-a6a2-84db802b371a" containerName="placement-db-sync" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.646594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43b67ac-4870-4632-a6a2-84db802b371a-logs" (OuterVolumeSpecName: "logs") pod "e43b67ac-4870-4632-a6a2-84db802b371a" (UID: "e43b67ac-4870-4632-a6a2-84db802b371a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.664869 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43b67ac-4870-4632-a6a2-84db802b371a-kube-api-access-rwjxr" (OuterVolumeSpecName: "kube-api-access-rwjxr") pod "e43b67ac-4870-4632-a6a2-84db802b371a" (UID: "e43b67ac-4870-4632-a6a2-84db802b371a"). InnerVolumeSpecName "kube-api-access-rwjxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.667487 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdbc57ff5-2cpdh"] Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.667527 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.669546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-scripts" (OuterVolumeSpecName: "scripts") pod "e43b67ac-4870-4632-a6a2-84db802b371a" (UID: "e43b67ac-4870-4632-a6a2-84db802b371a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.671198 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.671441 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.759057 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwjxr\" (UniqueName: \"kubernetes.io/projected/e43b67ac-4870-4632-a6a2-84db802b371a-kube-api-access-rwjxr\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.759109 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43b67ac-4870-4632-a6a2-84db802b371a-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.759132 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.785757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e43b67ac-4870-4632-a6a2-84db802b371a" (UID: "e43b67ac-4870-4632-a6a2-84db802b371a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.796472 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-config-data" (OuterVolumeSpecName: "config-data") pod "e43b67ac-4870-4632-a6a2-84db802b371a" (UID: "e43b67ac-4870-4632-a6a2-84db802b371a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.862255 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-internal-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.862373 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-combined-ca-bundle\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.862428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-config\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.862456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5c7h\" (UniqueName: \"kubernetes.io/projected/0864aed7-87aa-47d4-b38e-17d8863bb83e-kube-api-access-j5c7h\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.862523 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-ovndb-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.862576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-public-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.863227 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-httpd-config\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.863356 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.863383 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43b67ac-4870-4632-a6a2-84db802b371a-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.912265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"95a663f4-e6ab-4854-b1f0-3e98c6c39515","Type":"ContainerStarted","Data":"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf"} Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.912476 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-log" containerID="cri-o://fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01" gracePeriod=30 Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.913278 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-httpd" containerID="cri-o://730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf" gracePeriod=30 Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.932504 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-znj8j" event={"ID":"e43b67ac-4870-4632-a6a2-84db802b371a","Type":"ContainerDied","Data":"c3d95d584a70a2c5f8cdd4721fbfbdda7d4be764a78fd0c3336ae529cb5cd31c"} Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.932549 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3d95d584a70a2c5f8cdd4721fbfbdda7d4be764a78fd0c3336ae529cb5cd31c" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.932622 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-znj8j" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.949501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ea69524-a2dc-4d81-8eaa-9a1c8935513e","Type":"ContainerStarted","Data":"3f09d80cbdc742c007223aab453790317075a28b3743090dce5ca727503c23c6"} Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.949761 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-log" containerID="cri-o://7ccd2eeaeb5ab455b81dc377633d1a96db31dff154c2706514575abeeb08557c" gracePeriod=30 Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.950185 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-httpd" containerID="cri-o://3f09d80cbdc742c007223aab453790317075a28b3743090dce5ca727503c23c6" gracePeriod=30 Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.951181 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.951169717 podStartE2EDuration="6.951169717s" podCreationTimestamp="2025-12-04 06:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:03.943734432 +0000 UTC m=+1199.556552138" watchObservedRunningTime="2025-12-04 06:29:03.951169717 +0000 UTC m=+1199.563987423" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-ovndb-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " 
pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-public-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-httpd-config\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-internal-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964774 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-combined-ca-bundle\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964803 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-config\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.964828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5c7h\" (UniqueName: \"kubernetes.io/projected/0864aed7-87aa-47d4-b38e-17d8863bb83e-kube-api-access-j5c7h\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.975238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-internal-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.982380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-config\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.983158 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-public-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.985569 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-combined-ca-bundle\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.986070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-httpd-config\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.986275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0864aed7-87aa-47d4-b38e-17d8863bb83e-ovndb-tls-certs\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.990243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5c7h\" (UniqueName: \"kubernetes.io/projected/0864aed7-87aa-47d4-b38e-17d8863bb83e-kube-api-access-j5c7h\") pod \"neutron-5bdbc57ff5-2cpdh\" (UID: \"0864aed7-87aa-47d4-b38e-17d8863bb83e\") " pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:03 crc kubenswrapper[4832]: I1204 06:29:03.994277 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.994256906 podStartE2EDuration="5.994256906s" podCreationTimestamp="2025-12-04 06:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:03.990585414 +0000 UTC m=+1199.603403120" watchObservedRunningTime="2025-12-04 06:29:03.994256906 +0000 UTC m=+1199.607074612" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.053963 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56555b86cd-htxqh"] Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.055581 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.065674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.066046 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.066316 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.066483 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bj74g" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.066875 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.067896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-combined-ca-bundle\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.072576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-internal-tls-certs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.072724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-config-data\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.072799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6njd\" (UniqueName: \"kubernetes.io/projected/d282cab8-b359-4fc9-9f34-95b8b1984106-kube-api-access-p6njd\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.072957 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-scripts\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.073104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-public-tls-certs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.073365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d282cab8-b359-4fc9-9f34-95b8b1984106-logs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.102498 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56555b86cd-htxqh"] Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.103123 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-config-data\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6njd\" (UniqueName: \"kubernetes.io/projected/d282cab8-b359-4fc9-9f34-95b8b1984106-kube-api-access-p6njd\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-scripts\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-public-tls-certs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d282cab8-b359-4fc9-9f34-95b8b1984106-logs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-combined-ca-bundle\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.194777 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-internal-tls-certs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.195878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d282cab8-b359-4fc9-9f34-95b8b1984106-logs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " 
pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.203847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-public-tls-certs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.204196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-combined-ca-bundle\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.204931 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-internal-tls-certs\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.205314 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-scripts\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.208999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d282cab8-b359-4fc9-9f34-95b8b1984106-config-data\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.222029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6njd\" (UniqueName: \"kubernetes.io/projected/d282cab8-b359-4fc9-9f34-95b8b1984106-kube-api-access-p6njd\") pod \"placement-56555b86cd-htxqh\" (UID: \"d282cab8-b359-4fc9-9f34-95b8b1984106\") " pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.435859 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.875300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdbc57ff5-2cpdh"] Dec 04 06:29:04 crc kubenswrapper[4832]: I1204 06:29:04.980261 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.009678 4832 generic.go:334] "Generic (PLEG): container finished" podID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerID="730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf" exitCode=0 Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.010158 4832 generic.go:334] "Generic (PLEG): container finished" podID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerID="fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01" exitCode=143 Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.010344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95a663f4-e6ab-4854-b1f0-3e98c6c39515","Type":"ContainerDied","Data":"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.010474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95a663f4-e6ab-4854-b1f0-3e98c6c39515","Type":"ContainerDied","Data":"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.010539 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"95a663f4-e6ab-4854-b1f0-3e98c6c39515","Type":"ContainerDied","Data":"57d76c47b5b4bfd505932f8c766611e27f39f6a333d40d4c8d06ab75e10013cd"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.010632 4832 scope.go:117] "RemoveContainer" containerID="730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.021114 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.022917 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-logs\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.023584 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-httpd-run\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.023760 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.023887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zqgn\" (UniqueName: \"kubernetes.io/projected/95a663f4-e6ab-4854-b1f0-3e98c6c39515-kube-api-access-5zqgn\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.024056 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-config-data\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.024182 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-scripts\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.024288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-combined-ca-bundle\") pod \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\" (UID: \"95a663f4-e6ab-4854-b1f0-3e98c6c39515\") " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.025391 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-logs" (OuterVolumeSpecName: "logs") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.025758 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.026717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.026859 4832 generic.go:334] "Generic (PLEG): container finished" podID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerID="3f09d80cbdc742c007223aab453790317075a28b3743090dce5ca727503c23c6" exitCode=0 Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.026887 4832 generic.go:334] "Generic (PLEG): container finished" podID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerID="7ccd2eeaeb5ab455b81dc377633d1a96db31dff154c2706514575abeeb08557c" exitCode=143 Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.026944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ea69524-a2dc-4d81-8eaa-9a1c8935513e","Type":"ContainerDied","Data":"3f09d80cbdc742c007223aab453790317075a28b3743090dce5ca727503c23c6"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.026977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ea69524-a2dc-4d81-8eaa-9a1c8935513e","Type":"ContainerDied","Data":"7ccd2eeaeb5ab455b81dc377633d1a96db31dff154c2706514575abeeb08557c"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.029407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdbc57ff5-2cpdh" event={"ID":"0864aed7-87aa-47d4-b38e-17d8863bb83e","Type":"ContainerStarted","Data":"34edc0febdd34ceade219c53f2646eb61ec2b1ee8d9793be3e05e6f8a954a6f0"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.041832 4832 generic.go:334] "Generic (PLEG): container finished" podID="28ae9519-5721-4fbb-87b1-3b215638adaf" containerID="0a8620eccdd41a2c81ab4d36e60b401e7aae2336bf55e67c8b07672226d3c4f8" exitCode=0 Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.043257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nm7x4" event={"ID":"28ae9519-5721-4fbb-87b1-3b215638adaf","Type":"ContainerDied","Data":"0a8620eccdd41a2c81ab4d36e60b401e7aae2336bf55e67c8b07672226d3c4f8"} Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.056545 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a663f4-e6ab-4854-b1f0-3e98c6c39515-kube-api-access-5zqgn" (OuterVolumeSpecName: "kube-api-access-5zqgn") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "kube-api-access-5zqgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.058609 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-scripts" (OuterVolumeSpecName: "scripts") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.082786 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.087063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.117558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-config-data" (OuterVolumeSpecName: "config-data") pod "95a663f4-e6ab-4854-b1f0-3e98c6c39515" (UID: "95a663f4-e6ab-4854-b1f0-3e98c6c39515"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.130375 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95a663f4-e6ab-4854-b1f0-3e98c6c39515-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.130916 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.130933 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zqgn\" (UniqueName: \"kubernetes.io/projected/95a663f4-e6ab-4854-b1f0-3e98c6c39515-kube-api-access-5zqgn\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.130952 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.130964 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.130975 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a663f4-e6ab-4854-b1f0-3e98c6c39515-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.153126 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56555b86cd-htxqh"] Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.212941 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.223385 4832 scope.go:117] "RemoveContainer" containerID="fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.236177 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.291707 4832 scope.go:117] "RemoveContainer" containerID="730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf" Dec 04 06:29:05 crc kubenswrapper[4832]: E1204 
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.292409 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf"} err="failed to get container status \"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf\": rpc error: code = NotFound desc = could not find container \"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf\": container with ID starting with 730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf not found: ID does not exist"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.292445 4832 scope.go:117] "RemoveContainer" containerID="fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"
Dec 04 06:29:05 crc kubenswrapper[4832]: E1204 06:29:05.294217 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01\": container with ID starting with fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01 not found: ID does not exist" containerID="fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.294263 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"} err="failed to get container status \"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01\": rpc error: code = NotFound desc = could not find container \"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01\": container with ID starting with fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01 not found: ID does not exist"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.294286 4832 scope.go:117] "RemoveContainer" containerID="730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.295611 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf"} err="failed to get container status \"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf\": rpc error: code = NotFound desc = could not find container \"730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf\": container with ID starting with 730758d990681b07e3fdb7d8e04f6c5731b823d565a29e03ebdc1a4fc81b5adf not found: ID does not exist"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.295631 4832 scope.go:117] "RemoveContainer" containerID="fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.296120 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01"} err="failed to get container status \"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01\": rpc error: code = NotFound desc = could not find container \"fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01\": container with ID starting with fa00e4f2bb818d613172bc4f039c30518f110bf2ea794e99678ceeaf88d6aa01 not found: ID does not exist"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.587551 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.633804 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.673468 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.718633 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 04 06:29:05 crc kubenswrapper[4832]: E1204 06:29:05.721528 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-log"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721577 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-log"
Dec 04 06:29:05 crc kubenswrapper[4832]: E1204 06:29:05.721592 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-httpd"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721602 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-httpd"
Dec 04 06:29:05 crc kubenswrapper[4832]: E1204 06:29:05.721633 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-log"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721643 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-log"
Dec 04 06:29:05 crc kubenswrapper[4832]: E1204 06:29:05.721673 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-httpd"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721683 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-httpd"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721915 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-httpd"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721929 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-log"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721969 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" containerName="glance-httpd"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.721982 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" containerName="glance-log"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.723558 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.727672 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.747506 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.747600 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.762700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-logs\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.762822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.763012 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d22\" (UniqueName: \"kubernetes.io/projected/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-kube-api-access-r6d22\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.763044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.763186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.763210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.763249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.763302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0"
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.864548 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-logs\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.864601 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lprhd\" (UniqueName: \"kubernetes.io/projected/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-kube-api-access-lprhd\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.865517 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-logs" (OuterVolumeSpecName: "logs") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.865624 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-httpd-run\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.865721 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-combined-ca-bundle\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.865810 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866011 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-config-data\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-scripts\") pod \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\" (UID: \"2ea69524-a2dc-4d81-8eaa-9a1c8935513e\") "
Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866529 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-logs\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866605 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866792 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d22\" (UniqueName: \"kubernetes.io/projected/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-kube-api-access-r6d22\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.866815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.867096 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.867124 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.867165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.867216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.867423 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.867442 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.873151 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-logs\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.873495 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.878943 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.874292 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.903590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-kube-api-access-lprhd" (OuterVolumeSpecName: "kube-api-access-lprhd") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "kube-api-access-lprhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.912702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d22\" (UniqueName: \"kubernetes.io/projected/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-kube-api-access-r6d22\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.917304 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.917639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-scripts" (OuterVolumeSpecName: "scripts") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.917884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.921438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.922069 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.936289 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.963318 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.970047 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.970166 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.970189 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:05 crc kubenswrapper[4832]: I1204 06:29:05.970201 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lprhd\" (UniqueName: \"kubernetes.io/projected/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-kube-api-access-lprhd\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.003019 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.008620 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-config-data" (OuterVolumeSpecName: "config-data") pod "2ea69524-a2dc-4d81-8eaa-9a1c8935513e" (UID: "2ea69524-a2dc-4d81-8eaa-9a1c8935513e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.073770 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea69524-a2dc-4d81-8eaa-9a1c8935513e-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.073803 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.084983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56555b86cd-htxqh" event={"ID":"d282cab8-b359-4fc9-9f34-95b8b1984106","Type":"ContainerStarted","Data":"f5085d130e4e8718f187eefcba9e770fa23b505bab1646e4c9412b9149ea260f"} Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.085057 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56555b86cd-htxqh" event={"ID":"d282cab8-b359-4fc9-9f34-95b8b1984106","Type":"ContainerStarted","Data":"1a32e3d0501109878fb9cd760d222374a93a267e9529460b88a355257741b07e"} Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.087612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ea69524-a2dc-4d81-8eaa-9a1c8935513e","Type":"ContainerDied","Data":"dd9d8ee9288819007b035d406c355b5ab88c9bd8c05387391f09ef5c7452fb61"} Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.087662 4832 scope.go:117] "RemoveContainer" containerID="3f09d80cbdc742c007223aab453790317075a28b3743090dce5ca727503c23c6" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.087770 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.093881 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.105613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdbc57ff5-2cpdh" event={"ID":"0864aed7-87aa-47d4-b38e-17d8863bb83e","Type":"ContainerStarted","Data":"ad354f9ac9db7bf3510866631972546f2bd0bbef44060831a1ca17b68663b041"} Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.157576 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.175562 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.181675 4832 scope.go:117] "RemoveContainer" containerID="7ccd2eeaeb5ab455b81dc377633d1a96db31dff154c2706514575abeeb08557c" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.204477 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.206328 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.209671 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.209967 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.255501 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.384927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385524 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvvv\" (UniqueName: \"kubernetes.io/projected/875b6361-178d-40f6-b4d0-328dd939c7c1-kube-api-access-5wvvv\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " 
pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385629 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.385907 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.487942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488142 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wvvv\" (UniqueName: \"kubernetes.io/projected/875b6361-178d-40f6-b4d0-328dd939c7c1-kube-api-access-5wvvv\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " 
pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488186 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.488210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.489220 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.489529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.489588 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.502109 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.509480 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.510278 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.512363 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.520358 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wvvv\" (UniqueName: \"kubernetes.io/projected/875b6361-178d-40f6-b4d0-328dd939c7c1-kube-api-access-5wvvv\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.623618 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.682921 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.683013 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.726043 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea69524-a2dc-4d81-8eaa-9a1c8935513e" path="/var/lib/kubelet/pods/2ea69524-a2dc-4d81-8eaa-9a1c8935513e/volumes" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.727947 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a663f4-e6ab-4854-b1f0-3e98c6c39515" path="/var/lib/kubelet/pods/95a663f4-e6ab-4854-b1f0-3e98c6c39515/volumes" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.744074 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.752139 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nm7x4" Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.834062 4832 util.go:30] "No sandbox for pod can be found. 
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.834062 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.861103 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-847bcdcbb8-ph9ks"
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.861690 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-847bcdcbb8-ph9ks"
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.904215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-combined-ca-bundle\") pod \"28ae9519-5721-4fbb-87b1-3b215638adaf\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") "
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.904460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-credential-keys\") pod \"28ae9519-5721-4fbb-87b1-3b215638adaf\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") "
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.904509 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-config-data\") pod \"28ae9519-5721-4fbb-87b1-3b215638adaf\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") "
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.904599 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cssb6\" (UniqueName: \"kubernetes.io/projected/28ae9519-5721-4fbb-87b1-3b215638adaf-kube-api-access-cssb6\") pod \"28ae9519-5721-4fbb-87b1-3b215638adaf\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") "
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.904699 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-scripts\") pod \"28ae9519-5721-4fbb-87b1-3b215638adaf\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") "
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.904750 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-fernet-keys\") pod \"28ae9519-5721-4fbb-87b1-3b215638adaf\" (UID: \"28ae9519-5721-4fbb-87b1-3b215638adaf\") "
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.910193 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-scripts" (OuterVolumeSpecName: "scripts") pod "28ae9519-5721-4fbb-87b1-3b215638adaf" (UID: "28ae9519-5721-4fbb-87b1-3b215638adaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.910679 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "28ae9519-5721-4fbb-87b1-3b215638adaf" (UID: "28ae9519-5721-4fbb-87b1-3b215638adaf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.911648 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28ae9519-5721-4fbb-87b1-3b215638adaf" (UID: "28ae9519-5721-4fbb-87b1-3b215638adaf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.914020 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ae9519-5721-4fbb-87b1-3b215638adaf-kube-api-access-cssb6" (OuterVolumeSpecName: "kube-api-access-cssb6") pod "28ae9519-5721-4fbb-87b1-3b215638adaf" (UID: "28ae9519-5721-4fbb-87b1-3b215638adaf"). InnerVolumeSpecName "kube-api-access-cssb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.939585 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-config-data" (OuterVolumeSpecName: "config-data") pod "28ae9519-5721-4fbb-87b1-3b215638adaf" (UID: "28ae9519-5721-4fbb-87b1-3b215638adaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:06 crc kubenswrapper[4832]: I1204 06:29:06.960787 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28ae9519-5721-4fbb-87b1-3b215638adaf" (UID: "28ae9519-5721-4fbb-87b1-3b215638adaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.008178 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cssb6\" (UniqueName: \"kubernetes.io/projected/28ae9519-5721-4fbb-87b1-3b215638adaf-kube-api-access-cssb6\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.008229 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.008242 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.008256 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.008272 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.008284 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ae9519-5721-4fbb-87b1-3b215638adaf-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.161092 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nm7x4"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.161343 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nm7x4" event={"ID":"28ae9519-5721-4fbb-87b1-3b215638adaf","Type":"ContainerDied","Data":"099f7006b860bd5f2390f0821acddc52f925b88fe6cef08aa31b3b0c1f06ac28"}
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.161501 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="099f7006b860bd5f2390f0821acddc52f925b88fe6cef08aa31b3b0c1f06ac28"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.169150 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5","Type":"ContainerStarted","Data":"87eff2cf29fe4f7bb3e85f379971d0280db4ae19b21b5a16205775fe5bfba62b"}
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.211279 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5fb459446f-clqb5"]
Dec 04 06:29:07 crc kubenswrapper[4832]: E1204 06:29:07.211736 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ae9519-5721-4fbb-87b1-3b215638adaf" containerName="keystone-bootstrap"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.211748 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae9519-5721-4fbb-87b1-3b215638adaf" containerName="keystone-bootstrap"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.211950 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ae9519-5721-4fbb-87b1-3b215638adaf" containerName="keystone-bootstrap"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.216067 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.219987 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.220305 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.220310 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.220533 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.220769 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4k75v"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.220945 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.259850 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5fb459446f-clqb5"]
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbjj\" (UniqueName: \"kubernetes.io/projected/dceba324-ac23-407c-ac0d-7ca4abce124d-kube-api-access-hwbjj\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314572 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-credential-keys\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-fernet-keys\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314816 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-combined-ca-bundle\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-internal-tls-certs\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-scripts\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.314995 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-public-tls-certs\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.315071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-config-data\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.390769 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-public-tls-certs\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-config-data\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbjj\" (UniqueName: \"kubernetes.io/projected/dceba324-ac23-407c-ac0d-7ca4abce124d-kube-api-access-hwbjj\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418563 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-credential-keys\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-fernet-keys\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-combined-ca-bundle\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-internal-tls-certs\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.418693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-scripts\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.428531 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-fernet-keys\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.431085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-credential-keys\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.439280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-scripts\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.439725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-config-data\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.440328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-public-tls-certs\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.440522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-combined-ca-bundle\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.446227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dceba324-ac23-407c-ac0d-7ca4abce124d-internal-tls-certs\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.460321 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbjj\" (UniqueName: \"kubernetes.io/projected/dceba324-ac23-407c-ac0d-7ca4abce124d-kube-api-access-hwbjj\") pod \"keystone-5fb459446f-clqb5\" (UID: \"dceba324-ac23-407c-ac0d-7ca4abce124d\") " pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:07 crc kubenswrapper[4832]: I1204 06:29:07.560173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.162827 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5fb459446f-clqb5"]
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.193126 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56555b86cd-htxqh" event={"ID":"d282cab8-b359-4fc9-9f34-95b8b1984106","Type":"ContainerStarted","Data":"053b034c1e2bd65ffd2515aaf1ff5e9e7c86666b35e281d09c269d6b24e9f3a4"}
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.193999 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56555b86cd-htxqh"
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.194067 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56555b86cd-htxqh"
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.198748 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdbc57ff5-2cpdh" event={"ID":"0864aed7-87aa-47d4-b38e-17d8863bb83e","Type":"ContainerStarted","Data":"895163250cd3d006be2531483472c76fc0dbbac60512ff1f81985657a6b019d6"}
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.200199 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bdbc57ff5-2cpdh"
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.203193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5","Type":"ContainerStarted","Data":"72840d5c7c99eb8eccb62d3df0b75b1b453c84818474041310afb94270f70700"}
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.205013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"875b6361-178d-40f6-b4d0-328dd939c7c1","Type":"ContainerStarted","Data":"c1e23bf6b79f5904799eb8761c1e8f19c22113b29f3170f0f4d7496e938b57b6"}
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.228045 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56555b86cd-htxqh" podStartSLOduration=4.228007307 podStartE2EDuration="4.228007307s" podCreationTimestamp="2025-12-04 06:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:08.218815429 +0000 UTC m=+1203.831633135" watchObservedRunningTime="2025-12-04 06:29:08.228007307 +0000 UTC m=+1203.840825013"
Dec 04 06:29:08 crc kubenswrapper[4832]: I1204 06:29:08.264042 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bdbc57ff5-2cpdh" podStartSLOduration=5.264003449 podStartE2EDuration="5.264003449s" podCreationTimestamp="2025-12-04 06:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:08.244099726 +0000 UTC m=+1203.856917432" watchObservedRunningTime="2025-12-04 06:29:08.264003449 +0000 UTC m=+1203.876821155"
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.014530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-8z88f"
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.081536 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-m66jf"]
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.081807 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" containerName="dnsmasq-dns" containerID="cri-o://07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863" gracePeriod=10
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.223985 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"875b6361-178d-40f6-b4d0-328dd939c7c1","Type":"ContainerStarted","Data":"205dc7e0e8a170782eaced157939013eb65c23f71592c0afdd68a0bea3ecc85e"}
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.226769 4832 generic.go:334] "Generic (PLEG): container finished" podID="f61de78c-0748-4b52-bff7-26132bd7179c" containerID="07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863" exitCode=0
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.226813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" event={"ID":"f61de78c-0748-4b52-bff7-26132bd7179c","Type":"ContainerDied","Data":"07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863"}
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.231280 4832 generic.go:334] "Generic (PLEG): container finished" podID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" containerID="2fdd0a583a96900f4953bd83e84fb9119e7628d8b7b1e0d3d5ae3e2389d88eca" exitCode=0
Dec 04 06:29:09 crc kubenswrapper[4832]: I1204 06:29:09.232088 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jggjz" event={"ID":"bac8c79c-e51d-4e52-a5d1-1f8472db13b1","Type":"ContainerDied","Data":"2fdd0a583a96900f4953bd83e84fb9119e7628d8b7b1e0d3d5ae3e2389d88eca"}
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.329627 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf" event={"ID":"f61de78c-0748-4b52-bff7-26132bd7179c","Type":"ContainerDied","Data":"20d5e6c93269e18e09169a8059d26ea50b4baee15fc3584ab22300f6e3961733"}
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.330150 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d5e6c93269e18e09169a8059d26ea50b4baee15fc3584ab22300f6e3961733"
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.331898 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jggjz" event={"ID":"bac8c79c-e51d-4e52-a5d1-1f8472db13b1","Type":"ContainerDied","Data":"e045b31671ddc39ac8bab5d3cb398e89ae1ce34cbee03c881d0d9c64a52f9a33"}
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.331942 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e045b31671ddc39ac8bab5d3cb398e89ae1ce34cbee03c881d0d9c64a52f9a33"
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.339111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5fb459446f-clqb5" event={"ID":"dceba324-ac23-407c-ac0d-7ca4abce124d","Type":"ContainerStarted","Data":"222449d9fd05aeb2e028be3cb6909b0b1a2aa85e46b1357ab05b48163e4c7360"}
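
Each "SyncLoop (PLEG): event for pod" line above carries a JSON payload with the pod UID, the event type (ContainerStarted or ContainerDied), and a container or sandbox ID. A sketch that decodes one of those payloads follows; the struct is shaped after the JSON keys visible in the log, not taken from kubelet source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Shape of the event={...} payload on "SyncLoop (PLEG): event for pod" lines;
// field names follow the JSON keys seen in the log.
type PLEGEvent struct {
	ID   string // pod UID
	Type string // e.g. ContainerStarted, ContainerDied
	Data string // container or sandbox ID
}

func main() {
	// Payload copied from the dnsmasq-dns-58dd9ff6bc-m66jf ContainerDied entry.
	raw := `{"ID":"f61de78c-0748-4b52-bff7-26132bd7179c","Type":"ContainerDied","Data":"07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863"}`
	var ev PLEGEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("pod=%s type=%s id=%s\n", ev.ID, ev.Type, ev.Data)
}
```
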
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.400316 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.409845 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.511177 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-svc\") pod \"f61de78c-0748-4b52-bff7-26132bd7179c\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.511235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-sb\") pod \"f61de78c-0748-4b52-bff7-26132bd7179c\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.511266 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-combined-ca-bundle\") pod \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.511300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn4bz\" (UniqueName: \"kubernetes.io/projected/f61de78c-0748-4b52-bff7-26132bd7179c-kube-api-access-dn4bz\") pod \"f61de78c-0748-4b52-bff7-26132bd7179c\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.511343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-config\") pod \"f61de78c-0748-4b52-bff7-26132bd7179c\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.513529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-swift-storage-0\") pod \"f61de78c-0748-4b52-bff7-26132bd7179c\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.513632 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqs6d\" (UniqueName: \"kubernetes.io/projected/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-kube-api-access-gqs6d\") pod \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.513697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-nb\") pod \"f61de78c-0748-4b52-bff7-26132bd7179c\" (UID: \"f61de78c-0748-4b52-bff7-26132bd7179c\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.513791 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-db-sync-config-data\") pod \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\" (UID: \"bac8c79c-e51d-4e52-a5d1-1f8472db13b1\") "
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.523665 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61de78c-0748-4b52-bff7-26132bd7179c-kube-api-access-dn4bz" (OuterVolumeSpecName: "kube-api-access-dn4bz") pod "f61de78c-0748-4b52-bff7-26132bd7179c" (UID: "f61de78c-0748-4b52-bff7-26132bd7179c"). InnerVolumeSpecName "kube-api-access-dn4bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.527235 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-kube-api-access-gqs6d" (OuterVolumeSpecName: "kube-api-access-gqs6d") pod "bac8c79c-e51d-4e52-a5d1-1f8472db13b1" (UID: "bac8c79c-e51d-4e52-a5d1-1f8472db13b1"). InnerVolumeSpecName "kube-api-access-gqs6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.530474 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bac8c79c-e51d-4e52-a5d1-1f8472db13b1" (UID: "bac8c79c-e51d-4e52-a5d1-1f8472db13b1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.615468 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn4bz\" (UniqueName: \"kubernetes.io/projected/f61de78c-0748-4b52-bff7-26132bd7179c-kube-api-access-dn4bz\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.615500 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqs6d\" (UniqueName: \"kubernetes.io/projected/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-kube-api-access-gqs6d\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.615512 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.791943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bac8c79c-e51d-4e52-a5d1-1f8472db13b1" (UID: "bac8c79c-e51d-4e52-a5d1-1f8472db13b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.818074 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f61de78c-0748-4b52-bff7-26132bd7179c" (UID: "f61de78c-0748-4b52-bff7-26132bd7179c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.819577 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.819605 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac8c79c-e51d-4e52-a5d1-1f8472db13b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.864190 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-config" (OuterVolumeSpecName: "config") pod "f61de78c-0748-4b52-bff7-26132bd7179c" (UID: "f61de78c-0748-4b52-bff7-26132bd7179c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.867883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f61de78c-0748-4b52-bff7-26132bd7179c" (UID: "f61de78c-0748-4b52-bff7-26132bd7179c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.885848 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f61de78c-0748-4b52-bff7-26132bd7179c" (UID: "f61de78c-0748-4b52-bff7-26132bd7179c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.896204 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f61de78c-0748-4b52-bff7-26132bd7179c" (UID: "f61de78c-0748-4b52-bff7-26132bd7179c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.920764 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.920789 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-config\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.920798 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:13 crc kubenswrapper[4832]: I1204 06:29:13.920809 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61de78c-0748-4b52-bff7-26132bd7179c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.349935 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5fb459446f-clqb5" event={"ID":"dceba324-ac23-407c-ac0d-7ca4abce124d","Type":"ContainerStarted","Data":"f95ad67e6be68830047bdccf12284926c11635774eeed36109a02d456fcc6730"}
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.351030 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5fb459446f-clqb5"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.352285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerStarted","Data":"a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3"}
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.354136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"875b6361-178d-40f6-b4d0-328dd939c7c1","Type":"ContainerStarted","Data":"df25075882d3aa12187faabeadf1bdaf9bc0b4998db66a662db347de86b50993"}
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.356115 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-m66jf"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.358814 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5","Type":"ContainerStarted","Data":"2fed1ff76351fbaebeafd36d90b0d252502fa2ebd49439af7909080730f623f7"}
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.358876 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jggjz"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.394213 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5fb459446f-clqb5" podStartSLOduration=7.394189519 podStartE2EDuration="7.394189519s" podCreationTimestamp="2025-12-04 06:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:14.368954093 +0000 UTC m=+1209.981771799" watchObservedRunningTime="2025-12-04 06:29:14.394189519 +0000 UTC m=+1210.007007225"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.447179 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.447163102 podStartE2EDuration="9.447163102s" podCreationTimestamp="2025-12-04 06:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:14.415645031 +0000 UTC m=+1210.028462747" watchObservedRunningTime="2025-12-04 06:29:14.447163102 +0000 UTC m=+1210.059980798"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.453894 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.453868119 podStartE2EDuration="8.453868119s" podCreationTimestamp="2025-12-04 06:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:14.436820016 +0000 UTC m=+1210.049637722" watchObservedRunningTime="2025-12-04 06:29:14.453868119 +0000 UTC m=+1210.066685825"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.481786 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-m66jf"]
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.491484 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-m66jf"]
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.640878 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6955d5c798-vn8dg"]
Dec 04 06:29:14 crc kubenswrapper[4832]: E1204 06:29:14.641460 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" containerName="barbican-db-sync"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.641477 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" containerName="barbican-db-sync"
Dec 04 06:29:14 crc kubenswrapper[4832]: E1204 06:29:14.641493 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" containerName="init"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.641500 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" containerName="init"
Dec 04 06:29:14 crc kubenswrapper[4832]: E1204 06:29:14.641524 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" containerName="dnsmasq-dns"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.641532 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" containerName="dnsmasq-dns"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.641736 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" containerName="dnsmasq-dns"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.641774 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" containerName="barbican-db-sync"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.643109 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.645654 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jjfxl"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.647842 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.650333 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.661906 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76cc4f7d9f-s989p"]
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.668681 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76cc4f7d9f-s989p"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.686897 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.688984 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6955d5c798-vn8dg"]
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.707537 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76cc4f7d9f-s989p"]
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.742082 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61de78c-0748-4b52-bff7-26132bd7179c" path="/var/lib/kubelet/pods/f61de78c-0748-4b52-bff7-26132bd7179c/volumes"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.752226 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cn5t\" (UniqueName: \"kubernetes.io/projected/94c9f353-9085-4009-b151-3d5f9418148e-kube-api-access-7cn5t\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.752307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94c9f353-9085-4009-b151-3d5f9418148e-logs\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg"
Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.752451 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-config-data\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-combined-ca-bundle\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.752535 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-config-data-custom\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.761649 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87ghz"] Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.763829 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.797206 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87ghz"] Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-config-data-custom\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854303 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-combined-ca-bundle\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-config-data\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-combined-ca-bundle\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854373 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-config-data-custom\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854419 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-svc\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ksb\" (UniqueName: \"kubernetes.io/projected/1c1f0f22-b600-4323-9799-b0d2125a8ce7-kube-api-access-46ksb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854808 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-config-data\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b753665-a7f4-4c62-b4ee-a0842bbbe487-logs\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cn5t\" (UniqueName: \"kubernetes.io/projected/94c9f353-9085-4009-b151-3d5f9418148e-kube-api-access-7cn5t\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgpqx\" (UniqueName: \"kubernetes.io/projected/4b753665-a7f4-4c62-b4ee-a0842bbbe487-kube-api-access-hgpqx\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94c9f353-9085-4009-b151-3d5f9418148e-logs\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 
06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.854980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-config\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.855000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.855591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94c9f353-9085-4009-b151-3d5f9418148e-logs\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.860930 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-config-data-custom\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.874679 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-config-data\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.878881 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94c9f353-9085-4009-b151-3d5f9418148e-combined-ca-bundle\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.896212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cn5t\" (UniqueName: \"kubernetes.io/projected/94c9f353-9085-4009-b151-3d5f9418148e-kube-api-access-7cn5t\") pod \"barbican-keystone-listener-6955d5c798-vn8dg\" (UID: \"94c9f353-9085-4009-b151-3d5f9418148e\") " pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.956865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.956928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ksb\" (UniqueName: \"kubernetes.io/projected/1c1f0f22-b600-4323-9799-b0d2125a8ce7-kube-api-access-46ksb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" 
(UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.956963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-config-data\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.956996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b753665-a7f4-4c62-b4ee-a0842bbbe487-logs\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957060 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgpqx\" (UniqueName: \"kubernetes.io/projected/4b753665-a7f4-4c62-b4ee-a0842bbbe487-kube-api-access-hgpqx\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-config\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957118 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957140 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-config-data-custom\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957181 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-combined-ca-bundle\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-svc\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: 
\"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957201 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64fd7ddbcd-4qjbk"] Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.957568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b753665-a7f4-4c62-b4ee-a0842bbbe487-logs\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.958337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.959069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.959827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.960180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.960813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-config\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.961641 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-svc\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.963694 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.966280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-config-data\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.966674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-config-data-custom\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: 
\"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.966954 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.968735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b753665-a7f4-4c62-b4ee-a0842bbbe487-combined-ca-bundle\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.988048 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64fd7ddbcd-4qjbk"] Dec 04 06:29:14 crc kubenswrapper[4832]: I1204 06:29:14.992186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ksb\" (UniqueName: \"kubernetes.io/projected/1c1f0f22-b600-4323-9799-b0d2125a8ce7-kube-api-access-46ksb\") pod \"dnsmasq-dns-85ff748b95-87ghz\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.011940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgpqx\" (UniqueName: \"kubernetes.io/projected/4b753665-a7f4-4c62-b4ee-a0842bbbe487-kube-api-access-hgpqx\") pod \"barbican-worker-76cc4f7d9f-s989p\" (UID: \"4b753665-a7f4-4c62-b4ee-a0842bbbe487\") " pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.058438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.058480 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-combined-ca-bundle\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.058510 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deac54af-5a7d-4356-9af2-911f17ab4129-logs\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.058553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhsxv\" (UniqueName: \"kubernetes.io/projected/deac54af-5a7d-4356-9af2-911f17ab4129-kube-api-access-lhsxv\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.058988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data-custom\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.105301 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.161071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data-custom\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.161147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.161177 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-combined-ca-bundle\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.161206 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deac54af-5a7d-4356-9af2-911f17ab4129-logs\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.161252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhsxv\" (UniqueName: \"kubernetes.io/projected/deac54af-5a7d-4356-9af2-911f17ab4129-kube-api-access-lhsxv\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.168745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-combined-ca-bundle\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.170474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.171017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data-custom\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 
06:29:15.171458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deac54af-5a7d-4356-9af2-911f17ab4129-logs\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.189071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhsxv\" (UniqueName: \"kubernetes.io/projected/deac54af-5a7d-4356-9af2-911f17ab4129-kube-api-access-lhsxv\") pod \"barbican-api-64fd7ddbcd-4qjbk\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.292682 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76cc4f7d9f-s989p" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.375086 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.460444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mxwh7" event={"ID":"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1","Type":"ContainerStarted","Data":"376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9"} Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.497428 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-mxwh7" podStartSLOduration=5.976381197 podStartE2EDuration="48.497411402s" podCreationTimestamp="2025-12-04 06:28:27 +0000 UTC" firstStartedPulling="2025-12-04 06:28:30.849063243 +0000 UTC m=+1166.461880939" lastFinishedPulling="2025-12-04 06:29:13.370093448 +0000 UTC m=+1208.982911144" observedRunningTime="2025-12-04 06:29:15.485288142 +0000 UTC m=+1211.098105848" watchObservedRunningTime="2025-12-04 06:29:15.497411402 +0000 UTC m=+1211.110229108" Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.528667 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6955d5c798-vn8dg"] Dec 04 06:29:15 crc kubenswrapper[4832]: I1204 06:29:15.818111 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87ghz"] Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.009118 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76cc4f7d9f-s989p"] Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.095051 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.095108 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.119994 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64fd7ddbcd-4qjbk"] Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.165599 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.197914 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.487348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-64fd7ddbcd-4qjbk" event={"ID":"deac54af-5a7d-4356-9af2-911f17ab4129","Type":"ContainerStarted","Data":"11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d"} Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.487756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" event={"ID":"deac54af-5a7d-4356-9af2-911f17ab4129","Type":"ContainerStarted","Data":"31271d54ed658b9cb5f4b9c318fe490528e1b427ed32bc89cd52613fcdbb8d17"} Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.491454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76cc4f7d9f-s989p" event={"ID":"4b753665-a7f4-4c62-b4ee-a0842bbbe487","Type":"ContainerStarted","Data":"1b7f18becc4e7ec13757a682be3c21f5402be05e367574428746cfa1a3858260"} Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.499151 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerID="da9296cae1b1b66f3c5558fc5956e067dfdd3d8158583fdf4d637ad8746d7d5a" exitCode=0 Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.499219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" event={"ID":"1c1f0f22-b600-4323-9799-b0d2125a8ce7","Type":"ContainerDied","Data":"da9296cae1b1b66f3c5558fc5956e067dfdd3d8158583fdf4d637ad8746d7d5a"} Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.499246 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" event={"ID":"1c1f0f22-b600-4323-9799-b0d2125a8ce7","Type":"ContainerStarted","Data":"470b9a50152e90f6f24cf3564fb81c4d66f7bcb0dac928d12264355aa1a8ff40"} Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.505631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" event={"ID":"94c9f353-9085-4009-b151-3d5f9418148e","Type":"ContainerStarted","Data":"1598f452b5a7e53a7c3b45a13be5f4f2b11d386eb26d21b0704379f1b0596ed5"} Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.506217 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.506239 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.685522 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.835126 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.835177 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.865155 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-847bcdcbb8-ph9ks" podUID="a75235c9-c000-495b-92d7-797733f10601" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" 
Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.889226 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:16 crc kubenswrapper[4832]: I1204 06:29:16.963319 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.519947 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" event={"ID":"deac54af-5a7d-4356-9af2-911f17ab4129","Type":"ContainerStarted","Data":"31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00"} Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.521637 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.532708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" event={"ID":"1c1f0f22-b600-4323-9799-b0d2125a8ce7","Type":"ContainerStarted","Data":"0f79130ad9c9f4734d9097aa97a410105810e0a2a04c14679cba498f1f1a5e03"} Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.533488 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.533512 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.533522 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.553902 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" podStartSLOduration=3.5538857999999998 podStartE2EDuration="3.5538858s" podCreationTimestamp="2025-12-04 06:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:17.548462245 +0000 UTC m=+1213.161279961" watchObservedRunningTime="2025-12-04 06:29:17.5538858 +0000 UTC m=+1213.166703506" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.575760 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" podStartSLOduration=3.5757374520000003 podStartE2EDuration="3.575737452s" podCreationTimestamp="2025-12-04 06:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:17.569009675 +0000 UTC m=+1213.181827381" watchObservedRunningTime="2025-12-04 06:29:17.575737452 +0000 UTC m=+1213.188555158" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.873007 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5869d975cd-47z8d"] Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.874895 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.878615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.878651 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.894681 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5869d975cd-47z8d"] Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.920564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgbh7\" (UniqueName: \"kubernetes.io/projected/26ad016c-9e7b-49bd-9031-830f8319a79d-kube-api-access-wgbh7\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.920832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-config-data\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.920897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-internal-tls-certs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.920985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-combined-ca-bundle\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.921029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-config-data-custom\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.921142 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-public-tls-certs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:17 crc kubenswrapper[4832]: I1204 06:29:17.921196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ad016c-9e7b-49bd-9031-830f8319a79d-logs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.024648 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-public-tls-certs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.024730 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ad016c-9e7b-49bd-9031-830f8319a79d-logs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.025249 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ad016c-9e7b-49bd-9031-830f8319a79d-logs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.025369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgbh7\" (UniqueName: \"kubernetes.io/projected/26ad016c-9e7b-49bd-9031-830f8319a79d-kube-api-access-wgbh7\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.027752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-config-data\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.027863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-internal-tls-certs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.028379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-combined-ca-bundle\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.028506 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-config-data-custom\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.032925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-public-tls-certs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.037962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-internal-tls-certs\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.040319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-combined-ca-bundle\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.042891 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-config-data\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.046056 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26ad016c-9e7b-49bd-9031-830f8319a79d-config-data-custom\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.048119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgbh7\" (UniqueName: \"kubernetes.io/projected/26ad016c-9e7b-49bd-9031-830f8319a79d-kube-api-access-wgbh7\") pod \"barbican-api-5869d975cd-47z8d\" (UID: \"26ad016c-9e7b-49bd-9031-830f8319a79d\") " pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.202039 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:18 crc kubenswrapper[4832]: I1204 06:29:18.557265 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.121179 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5869d975cd-47z8d"] Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.580327 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" event={"ID":"94c9f353-9085-4009-b151-3d5f9418148e","Type":"ContainerStarted","Data":"37d863ed1c65d3b55c5f0d63c83ca3d1366788d8fe7b26e4fc65dd270c70eba8"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.580450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" event={"ID":"94c9f353-9085-4009-b151-3d5f9418148e","Type":"ContainerStarted","Data":"5d9f2098d43a27302d434f1143f77927ad79e499f22fbd16e095b3a5e1f6bd02"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.586592 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76cc4f7d9f-s989p" event={"ID":"4b753665-a7f4-4c62-b4ee-a0842bbbe487","Type":"ContainerStarted","Data":"1ef234e6bbee02bf8dcb6b213a585c4c6b9d4f99933659386d8a85d1ad3092a7"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.586631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76cc4f7d9f-s989p" event={"ID":"4b753665-a7f4-4c62-b4ee-a0842bbbe487","Type":"ContainerStarted","Data":"45ffe1a9c0654e6c2d48a824e4c7c3810806e07152e8c24222c0b09d1b93e666"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.592749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5869d975cd-47z8d" event={"ID":"26ad016c-9e7b-49bd-9031-830f8319a79d","Type":"ContainerStarted","Data":"a7ed28e7b0d6e6806bc0e76d355bc4b4b39a096a38a2560fe722092ba4b5613b"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.592822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5869d975cd-47z8d" event={"ID":"26ad016c-9e7b-49bd-9031-830f8319a79d","Type":"ContainerStarted","Data":"98eed2c5afe98ea8bcd6ea8f6b47ffe1bf0b34a35b2106445a62155d254d6496"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.592846 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.592857 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.592865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5869d975cd-47z8d" event={"ID":"26ad016c-9e7b-49bd-9031-830f8319a79d","Type":"ContainerStarted","Data":"369a4e3ffcdabb1e39445e439237e3b703517591cec910ccd15a44f0f23acc74"} Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.601384 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6955d5c798-vn8dg" podStartSLOduration=2.694844443 podStartE2EDuration="5.601370535s" podCreationTimestamp="2025-12-04 06:29:14 +0000 UTC" firstStartedPulling="2025-12-04 06:29:15.639482896 +0000 UTC m=+1211.252300602" lastFinishedPulling="2025-12-04 06:29:18.546008988 +0000 UTC m=+1214.158826694" observedRunningTime="2025-12-04 06:29:19.600285478 +0000 UTC 
m=+1215.213103184" watchObservedRunningTime="2025-12-04 06:29:19.601370535 +0000 UTC m=+1215.214188241" Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.646449 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5869d975cd-47z8d" podStartSLOduration=2.646423382 podStartE2EDuration="2.646423382s" podCreationTimestamp="2025-12-04 06:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:19.621383811 +0000 UTC m=+1215.234201537" watchObservedRunningTime="2025-12-04 06:29:19.646423382 +0000 UTC m=+1215.259241088" Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.657972 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76cc4f7d9f-s989p" podStartSLOduration=3.132800276 podStartE2EDuration="5.657951538s" podCreationTimestamp="2025-12-04 06:29:14 +0000 UTC" firstStartedPulling="2025-12-04 06:29:16.019719268 +0000 UTC m=+1211.632536974" lastFinishedPulling="2025-12-04 06:29:18.54487054 +0000 UTC m=+1214.157688236" observedRunningTime="2025-12-04 06:29:19.654849501 +0000 UTC m=+1215.267667217" watchObservedRunningTime="2025-12-04 06:29:19.657951538 +0000 UTC m=+1215.270769244" Dec 04 06:29:19 crc kubenswrapper[4832]: I1204 06:29:19.831138 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 06:29:20 crc kubenswrapper[4832]: I1204 06:29:20.950069 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:21 crc kubenswrapper[4832]: I1204 06:29:21.613160 4832 generic.go:334] "Generic (PLEG): container finished" podID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" containerID="376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9" exitCode=0 Dec 04 06:29:21 crc kubenswrapper[4832]: I1204 06:29:21.613203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mxwh7" event={"ID":"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1","Type":"ContainerDied","Data":"376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9"} Dec 04 06:29:22 crc kubenswrapper[4832]: I1204 06:29:22.200591 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:22 crc kubenswrapper[4832]: I1204 06:29:22.655421 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 06:29:25 crc kubenswrapper[4832]: I1204 06:29:25.107568 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:25 crc kubenswrapper[4832]: I1204 06:29:25.174270 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8z88f"] Dec 04 06:29:25 crc kubenswrapper[4832]: I1204 06:29:25.174784 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerName="dnsmasq-dns" containerID="cri-o://eb6da92065a082aef9a6fb09337d66ec92a2829b4e32af5eff0246c73e5d73b1" gracePeriod=10 Dec 04 06:29:25 crc kubenswrapper[4832]: I1204 06:29:25.659246 4832 generic.go:334] "Generic (PLEG): container finished" podID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerID="eb6da92065a082aef9a6fb09337d66ec92a2829b4e32af5eff0246c73e5d73b1" 
exitCode=0 Dec 04 06:29:25 crc kubenswrapper[4832]: I1204 06:29:25.659422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" event={"ID":"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034","Type":"ContainerDied","Data":"eb6da92065a082aef9a6fb09337d66ec92a2829b4e32af5eff0246c73e5d73b1"} Dec 04 06:29:26 crc kubenswrapper[4832]: I1204 06:29:26.682935 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 04 06:29:26 crc kubenswrapper[4832]: I1204 06:29:26.861655 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-847bcdcbb8-ph9ks" podUID="a75235c9-c000-495b-92d7-797733f10601" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.105388 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.138229 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.211470 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.260787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-config-data\") pod \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.260942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-scripts\") pod \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.260963 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-combined-ca-bundle\") pod \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.260988 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-db-sync-config-data\") pod \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.261133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhd7h\" (UniqueName: \"kubernetes.io/projected/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-kube-api-access-fhd7h\") pod \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.261161 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-etc-machine-id\") pod \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\" (UID: \"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1\") " Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.261623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" (UID: "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.269328 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" (UID: "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.269611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-scripts" (OuterVolumeSpecName: "scripts") pod "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" (UID: "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.279534 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-kube-api-access-fhd7h" (OuterVolumeSpecName: "kube-api-access-fhd7h") pod "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" (UID: "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1"). InnerVolumeSpecName "kube-api-access-fhd7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.320635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" (UID: "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.390546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-config-data" (OuterVolumeSpecName: "config-data") pod "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" (UID: "0f50b7d2-4e8d-4905-85ec-811cdd3c60d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.392859 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhd7h\" (UniqueName: \"kubernetes.io/projected/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-kube-api-access-fhd7h\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.392904 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.392917 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.392930 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.392941 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.392953 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.685012 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mxwh7" event={"ID":"0f50b7d2-4e8d-4905-85ec-811cdd3c60d1","Type":"ContainerDied","Data":"d903aedc2fe384bf8fc637d67c628efc8b4639ea2676f059c1a3666ce15cd1b7"} Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.685056 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mxwh7" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.685064 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d903aedc2fe384bf8fc637d67c628efc8b4639ea2676f059c1a3666ce15cd1b7" Dec 04 06:29:27 crc kubenswrapper[4832]: I1204 06:29:27.919492 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.005482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-sb\") pod \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.005650 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-nb\") pod \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.006232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-config\") pod \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.006656 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx7mq\" (UniqueName: \"kubernetes.io/projected/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-kube-api-access-tx7mq\") pod \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.006717 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-swift-storage-0\") pod \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.006744 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-svc\") pod \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\" (UID: \"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034\") " Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.026885 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-kube-api-access-tx7mq" (OuterVolumeSpecName: "kube-api-access-tx7mq") pod "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" (UID: "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034"). InnerVolumeSpecName "kube-api-access-tx7mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.067992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" (UID: "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.077687 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" (UID: "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.077925 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" (UID: "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.088907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" (UID: "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.090674 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-config" (OuterVolumeSpecName: "config") pod "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" (UID: "d7e622fe-ec6f-4ae8-bac9-0f3a5109c034"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.111663 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx7mq\" (UniqueName: \"kubernetes.io/projected/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-kube-api-access-tx7mq\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.111692 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.111703 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.111713 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.111724 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.111734 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:28 crc kubenswrapper[4832]: E1204 06:29:28.295815 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.469505 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:28 crc kubenswrapper[4832]: E1204 06:29:28.470609 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerName="init" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.470632 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerName="init" Dec 04 06:29:28 crc kubenswrapper[4832]: E1204 06:29:28.470661 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerName="dnsmasq-dns" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.470669 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerName="dnsmasq-dns" Dec 04 06:29:28 crc kubenswrapper[4832]: E1204 06:29:28.470695 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" containerName="cinder-db-sync" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.470705 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" containerName="cinder-db-sync" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.470935 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" containerName="dnsmasq-dns" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.470955 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" containerName="cinder-db-sync" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.472045 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.483260 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j4csp" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.483320 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.484187 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.484427 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.495714 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.564440 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-958hb"] Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.573925 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.585999 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-958hb"] Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.622565 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpf4\" (UniqueName: \"kubernetes.io/projected/d27350cb-2b8b-4f39-bc71-a7efd2d56004-kube-api-access-tkpf4\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.622645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.622699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-scripts\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.622818 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.622839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.622879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d27350cb-2b8b-4f39-bc71-a7efd2d56004-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.725861 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.725908 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.725945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-scripts\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.725988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd62x\" (UniqueName: \"kubernetes.io/projected/cd410c96-2fee-471a-8807-257ea9328e20-kube-api-access-qd62x\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-config\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d27350cb-2b8b-4f39-bc71-a7efd2d56004-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpf4\" (UniqueName: \"kubernetes.io/projected/d27350cb-2b8b-4f39-bc71-a7efd2d56004-kube-api-access-tkpf4\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.726189 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.731569 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.731635 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d27350cb-2b8b-4f39-bc71-a7efd2d56004-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.733338 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.735140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.738301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-8z88f" event={"ID":"d7e622fe-ec6f-4ae8-bac9-0f3a5109c034","Type":"ContainerDied","Data":"cb4e87cb7328e56daa8df8c4ec84cd458c9e4704a1c85ad9b09dbc66b8e53f1f"} Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.738352 4832 scope.go:117] "RemoveContainer" containerID="eb6da92065a082aef9a6fb09337d66ec92a2829b4e32af5eff0246c73e5d73b1" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.741673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-scripts\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.759664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: W1204 06:29:28.771050 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f50b7d2_4e8d_4905_85ec_811cdd3c60d1.slice/crio-conmon-376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f50b7d2_4e8d_4905_85ec_811cdd3c60d1.slice/crio-conmon-376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9.scope: no such file or directory Dec 04 06:29:28 crc kubenswrapper[4832]: W1204 06:29:28.771111 4832 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f50b7d2_4e8d_4905_85ec_811cdd3c60d1.slice/crio-376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f50b7d2_4e8d_4905_85ec_811cdd3c60d1.slice/crio-376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9.scope: no such file or directory Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.787650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerStarted","Data":"98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1"} Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.787869 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="ceilometer-notification-agent" containerID="cri-o://61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765" gracePeriod=30 Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.788158 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.789236 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="proxy-httpd" containerID="cri-o://98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1" gracePeriod=30 Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.789360 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="sg-core" containerID="cri-o://a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3" gracePeriod=30 Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.796652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpf4\" (UniqueName: \"kubernetes.io/projected/d27350cb-2b8b-4f39-bc71-a7efd2d56004-kube-api-access-tkpf4\") pod \"cinder-scheduler-0\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.798948 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.799895 4832 generic.go:334] "Generic (PLEG): container finished" podID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerID="6aa66e15069995f67c76bbfb6d21b690ba298f03c0cb13174464ecce3141cfca" exitCode=137 Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.799924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4c9bd8c5-dspfj" event={"ID":"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8","Type":"ContainerDied","Data":"6aa66e15069995f67c76bbfb6d21b690ba298f03c0cb13174464ecce3141cfca"} Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.807997 4832 scope.go:117] "RemoveContainer" containerID="7a72f2c6fe967dc4d5861e9aeed27c73546af68f3f94b61f24925029bb594a48" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.834860 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.835096 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.835298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd62x\" (UniqueName: \"kubernetes.io/projected/cd410c96-2fee-471a-8807-257ea9328e20-kube-api-access-qd62x\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.835354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-config\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.835391 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.835481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.835919 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 
06:29:28.836235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.836614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-config\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.837797 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.839189 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.864084 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd62x\" (UniqueName: \"kubernetes.io/projected/cd410c96-2fee-471a-8807-257ea9328e20-kube-api-access-qd62x\") pod \"dnsmasq-dns-5c9776ccc5-958hb\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.903134 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.924188 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.931772 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.935724 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 04 06:29:28 crc kubenswrapper[4832]: I1204 06:29:28.956191 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:28 crc kubenswrapper[4832]: W1204 06:29:28.982482 4832 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe54a9ec_6e1c_4745_95df_4c56a07ce2f2.slice/crio-a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3.scope/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe54a9ec_6e1c_4745_95df_4c56a07ce2f2.slice/crio-a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3.scope/cpuset.cpus.effective: no such device Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045337 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcr4g\" (UniqueName: \"kubernetes.io/projected/88b8fceb-4889-40d2-99cb-23e8ffe00a81-kube-api-access-rcr4g\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045458 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-scripts\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045511 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88b8fceb-4889-40d2-99cb-23e8ffe00a81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data-custom\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.045649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88b8fceb-4889-40d2-99cb-23e8ffe00a81-logs\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" 
Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.057458 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8z88f"] Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.082736 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-8z88f"] Dec 04 06:29:29 crc kubenswrapper[4832]: W1204 06:29:29.099664 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe54a9ec_6e1c_4745_95df_4c56a07ce2f2.slice/crio-conmon-98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe54a9ec_6e1c_4745_95df_4c56a07ce2f2.slice/crio-conmon-98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1.scope: no such file or directory Dec 04 06:29:29 crc kubenswrapper[4832]: W1204 06:29:29.099749 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe54a9ec_6e1c_4745_95df_4c56a07ce2f2.slice/crio-98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe54a9ec_6e1c_4745_95df_4c56a07ce2f2.slice/crio-98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1.scope: no such file or directory Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.148631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.148985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-scripts\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.149020 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.149058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88b8fceb-4889-40d2-99cb-23e8ffe00a81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.149113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data-custom\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.149162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88b8fceb-4889-40d2-99cb-23e8ffe00a81-logs\") pod \"cinder-api-0\" 
(UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.149216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcr4g\" (UniqueName: \"kubernetes.io/projected/88b8fceb-4889-40d2-99cb-23e8ffe00a81-kube-api-access-rcr4g\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.154225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88b8fceb-4889-40d2-99cb-23e8ffe00a81-etc-machine-id\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.157925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88b8fceb-4889-40d2-99cb-23e8ffe00a81-logs\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.172152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data-custom\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.181023 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-scripts\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.183006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.194247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcr4g\" (UniqueName: \"kubernetes.io/projected/88b8fceb-4889-40d2-99cb-23e8ffe00a81-kube-api-access-rcr4g\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.207902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.231817 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:29:29 crc kubenswrapper[4832]: E1204 06:29:29.430346 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae9519_5721_4fbb_87b1_3b215638adaf.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac8c79c_e51d_4e52_a5d1_1f8472db13b1.slice/crio-conmon-2fdd0a583a96900f4953bd83e84fb9119e7628d8b7b1e0d3d5ae3e2389d88eca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac8c79c_e51d_4e52_a5d1_1f8472db13b1.slice/crio-2fdd0a583a96900f4953bd83e84fb9119e7628d8b7b1e0d3d5ae3e2389d88eca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf61de78c_0748_4b52_bff7_26132bd7179c.slice/crio-20d5e6c93269e18e09169a8059d26ea50b4baee15fc3584ab22300f6e3961733\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac8c79c_e51d_4e52_a5d1_1f8472db13b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf61de78c_0748_4b52_bff7_26132bd7179c.slice/crio-conmon-07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7e622fe_ec6f_4ae8_bac9_0f3a5109c034.slice/crio-cb4e87cb7328e56daa8df8c4ec84cd458c9e4704a1c85ad9b09dbc66b8e53f1f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae9519_5721_4fbb_87b1_3b215638adaf.slice/crio-099f7006b860bd5f2390f0821acddc52f925b88fe6cef08aa31b3b0c1f06ac28\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a34b59_464e_4a39_9f7a_c4ffe98f53f8.slice/crio-627119cc4d430756dd1761c5990b5a4d00fa85542b31eaf6ede412bc3955cd51.scope\": RecentStats: unable to find data in memory cache]" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.434276 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.763739 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-958hb"] Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.851207 4832 generic.go:334] "Generic (PLEG): container finished" podID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerID="98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1" exitCode=0 Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.851550 4832 generic.go:334] "Generic (PLEG): container finished" podID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerID="a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3" exitCode=2 Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.851595 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerDied","Data":"98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1"} Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.851624 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerDied","Data":"a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3"} Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.869843 4832 generic.go:334] "Generic (PLEG): container finished" podID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerID="627119cc4d430756dd1761c5990b5a4d00fa85542b31eaf6ede412bc3955cd51" exitCode=137 Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.869915 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4c9bd8c5-dspfj" event={"ID":"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8","Type":"ContainerDied","Data":"627119cc4d430756dd1761c5990b5a4d00fa85542b31eaf6ede412bc3955cd51"} Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.871712 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.881084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" event={"ID":"cd410c96-2fee-471a-8807-257ea9328e20","Type":"ContainerStarted","Data":"d37f111298ac4e4de81d29f635676cde2a3d14e53ac2ed8df9416775812f2a7e"} Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.889915 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b4c9bd8c5-dspfj" Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.986155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr2x7\" (UniqueName: \"kubernetes.io/projected/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-kube-api-access-mr2x7\") pod \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.986225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-config-data\") pod \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.986304 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-scripts\") pod \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.986458 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-horizon-secret-key\") pod \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.986657 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-logs\") pod \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\" (UID: \"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8\") " Dec 04 06:29:29 crc kubenswrapper[4832]: I1204 06:29:29.987716 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-logs" (OuterVolumeSpecName: "logs") pod "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" (UID: "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.009563 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-kube-api-access-mr2x7" (OuterVolumeSpecName: "kube-api-access-mr2x7") pod "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" (UID: "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8"). InnerVolumeSpecName "kube-api-access-mr2x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.013068 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" (UID: "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.041725 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-config-data" (OuterVolumeSpecName: "config-data") pod "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" (UID: "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.062976 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-scripts" (OuterVolumeSpecName: "scripts") pod "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" (UID: "c2a34b59-464e-4a39-9f7a-c4ffe98f53f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.092989 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.093027 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr2x7\" (UniqueName: \"kubernetes.io/projected/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-kube-api-access-mr2x7\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.093038 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.093047 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.093056 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.160666 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:30 crc kubenswrapper[4832]: W1204 06:29:30.162088 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b8fceb_4889_40d2_99cb_23e8ffe00a81.slice/crio-5011f3ace2fab2d4d20abeef90dd0c8634b988e7194722ff3c2c82c3fb7b4aab WatchSource:0}: Error finding container 5011f3ace2fab2d4d20abeef90dd0c8634b988e7194722ff3c2c82c3fb7b4aab: Status 404 returned error can't find the container with id 5011f3ace2fab2d4d20abeef90dd0c8634b988e7194722ff3c2c82c3fb7b4aab Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.732784 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e622fe-ec6f-4ae8-bac9-0f3a5109c034" path="/var/lib/kubelet/pods/d7e622fe-ec6f-4ae8-bac9-0f3a5109c034/volumes" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.924610 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.927768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88b8fceb-4889-40d2-99cb-23e8ffe00a81","Type":"ContainerStarted","Data":"5011f3ace2fab2d4d20abeef90dd0c8634b988e7194722ff3c2c82c3fb7b4aab"} Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.930048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b4c9bd8c5-dspfj" event={"ID":"c2a34b59-464e-4a39-9f7a-c4ffe98f53f8","Type":"ContainerDied","Data":"d0229f7eebc323adba1fac4fbc0b9239b2e9b49a3547ed8fdaf14668c60ba714"} Dec 04 06:29:30 crc 
kubenswrapper[4832]: I1204 06:29:30.930079 4832 scope.go:117] "RemoveContainer" containerID="627119cc4d430756dd1761c5990b5a4d00fa85542b31eaf6ede412bc3955cd51" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.930184 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b4c9bd8c5-dspfj" Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.944329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d27350cb-2b8b-4f39-bc71-a7efd2d56004","Type":"ContainerStarted","Data":"93068733f4d9025e6dd824643e0a219aee3ceceb9086dc271e12f1b05d5ac87b"} Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.947379 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd410c96-2fee-471a-8807-257ea9328e20" containerID="abc07c133cee71c5fd35acf629bf73d747585ec1cec97c6c06e14575507a7f43" exitCode=0 Dec 04 06:29:30 crc kubenswrapper[4832]: I1204 06:29:30.947422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" event={"ID":"cd410c96-2fee-471a-8807-257ea9328e20","Type":"ContainerDied","Data":"abc07c133cee71c5fd35acf629bf73d747585ec1cec97c6c06e14575507a7f43"} Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.065929 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b4c9bd8c5-dspfj"] Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.167345 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b4c9bd8c5-dspfj"] Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.257042 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.326448 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5869d975cd-47z8d" Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.383214 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64fd7ddbcd-4qjbk"] Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.383570 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api-log" containerID="cri-o://11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d" gracePeriod=30 Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.384173 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api" containerID="cri-o://31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00" gracePeriod=30 Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.430594 4832 scope.go:117] "RemoveContainer" containerID="6aa66e15069995f67c76bbfb6d21b690ba298f03c0cb13174464ecce3141cfca" Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.981569 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" event={"ID":"cd410c96-2fee-471a-8807-257ea9328e20","Type":"ContainerStarted","Data":"2af4948344d09db3fc13ce47a313ebb0004d542e2f844841473bb1b5a12693d1"} Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.984845 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:31 crc kubenswrapper[4832]: I1204 06:29:31.993862 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"88b8fceb-4889-40d2-99cb-23e8ffe00a81","Type":"ContainerStarted","Data":"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4"} Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.001106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d27350cb-2b8b-4f39-bc71-a7efd2d56004","Type":"ContainerStarted","Data":"467f0c3a81ad6aeec6a31963cea253334623015f968aff2388da6e76bf8d9ded"} Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.016648 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" podStartSLOduration=4.016632715 podStartE2EDuration="4.016632715s" podCreationTimestamp="2025-12-04 06:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:32.014759957 +0000 UTC m=+1227.627577663" watchObservedRunningTime="2025-12-04 06:29:32.016632715 +0000 UTC m=+1227.629450421" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.018855 4832 generic.go:334] "Generic (PLEG): container finished" podID="deac54af-5a7d-4356-9af2-911f17ab4129" containerID="11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d" exitCode=143 Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.019707 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" event={"ID":"deac54af-5a7d-4356-9af2-911f17ab4129","Type":"ContainerDied","Data":"11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d"} Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.722373 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" path="/var/lib/kubelet/pods/c2a34b59-464e-4a39-9f7a-c4ffe98f53f8/volumes" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.753933 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868036 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-config-data\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-run-httpd\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868175 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-sg-core-conf-yaml\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868200 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v77j\" (UniqueName: \"kubernetes.io/projected/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-kube-api-access-2v77j\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868306 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-log-httpd\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-combined-ca-bundle\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-scripts\") pod \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\" (UID: \"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2\") " Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.868815 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.869185 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.869438 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.875105 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-scripts" (OuterVolumeSpecName: "scripts") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.875356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-kube-api-access-2v77j" (OuterVolumeSpecName: "kube-api-access-2v77j") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). InnerVolumeSpecName "kube-api-access-2v77j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.901780 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.927371 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.962601 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-config-data" (OuterVolumeSpecName: "config-data") pod "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" (UID: "fe54a9ec-6e1c-4745-95df-4c56a07ce2f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.971315 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.971358 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.971377 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.971392 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.971459 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:32 crc kubenswrapper[4832]: I1204 06:29:32.971473 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v77j\" (UniqueName: \"kubernetes.io/projected/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2-kube-api-access-2v77j\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.032360 4832 generic.go:334] "Generic (PLEG): container finished" podID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerID="61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765" exitCode=0 Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.032487 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.032440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerDied","Data":"61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765"} Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.032586 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe54a9ec-6e1c-4745-95df-4c56a07ce2f2","Type":"ContainerDied","Data":"903bddc85d77cf9d8d8be5026e008318e94909062450c5228abf702589deaae3"} Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.032611 4832 scope.go:117] "RemoveContainer" containerID="98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.035983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d27350cb-2b8b-4f39-bc71-a7efd2d56004","Type":"ContainerStarted","Data":"65a9ac6e89292f4f75ab7c31a153e74abd5ec8681d6dc1189f4d966a5db11ea0"} Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.042136 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api-log" containerID="cri-o://8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4" gracePeriod=30 Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.042272 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api" containerID="cri-o://a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037" gracePeriod=30 Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.042048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88b8fceb-4889-40d2-99cb-23e8ffe00a81","Type":"ContainerStarted","Data":"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037"} Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.042659 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.073136 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.261435366 podStartE2EDuration="5.073115538s" podCreationTimestamp="2025-12-04 06:29:28 +0000 UTC" firstStartedPulling="2025-12-04 06:29:29.875648091 +0000 UTC m=+1225.488465797" lastFinishedPulling="2025-12-04 06:29:30.687328263 +0000 UTC m=+1226.300145969" observedRunningTime="2025-12-04 06:29:33.062963797 +0000 UTC m=+1228.675781503" watchObservedRunningTime="2025-12-04 06:29:33.073115538 +0000 UTC m=+1228.685933244" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.085521 4832 scope.go:117] "RemoveContainer" containerID="a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.095850 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.095810712 podStartE2EDuration="5.095810712s" podCreationTimestamp="2025-12-04 06:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:33.084132522 +0000 UTC m=+1228.696950228" 
watchObservedRunningTime="2025-12-04 06:29:33.095810712 +0000 UTC m=+1228.708628418" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.137937 4832 scope.go:117] "RemoveContainer" containerID="61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.172463 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.183966 4832 scope.go:117] "RemoveContainer" containerID="98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.185088 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1\": container with ID starting with 98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1 not found: ID does not exist" containerID="98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.185142 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1"} err="failed to get container status \"98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1\": rpc error: code = NotFound desc = could not find container \"98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1\": container with ID starting with 98c474046e1a89f4b3df3aa8431ac95c819ef24fd7785a5cd9df33477f5086b1 not found: ID does not exist" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.185178 4832 scope.go:117] "RemoveContainer" containerID="a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.185674 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3\": container with ID starting with a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3 not found: ID does not exist" containerID="a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.185711 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3"} err="failed to get container status \"a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3\": rpc error: code = NotFound desc = could not find container \"a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3\": container with ID starting with a7a889167e8602579db803303410b4dd27eb56e026906e2ea64c1314041c29a3 not found: ID does not exist" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.185731 4832 scope.go:117] "RemoveContainer" containerID="61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.186007 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765\": container with ID starting with 61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765 not found: ID does not exist" containerID="61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765" Dec 04 06:29:33 crc 
kubenswrapper[4832]: I1204 06:29:33.186051 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765"} err="failed to get container status \"61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765\": rpc error: code = NotFound desc = could not find container \"61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765\": container with ID starting with 61372d5b114ba9cedb72071e7c4367a954b889045651e6eb072b227feadd5765 not found: ID does not exist" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.216555 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.228497 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.229323 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229347 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.229368 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="ceilometer-notification-agent" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229376 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="ceilometer-notification-agent" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.229415 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="sg-core" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229426 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="sg-core" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.229476 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon-log" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229485 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon-log" Dec 04 06:29:33 crc kubenswrapper[4832]: E1204 06:29:33.229523 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="proxy-httpd" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229532 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="proxy-httpd" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229761 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="sg-core" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229784 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="ceilometer-notification-agent" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229803 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229812 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" containerName="proxy-httpd" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.229822 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a34b59-464e-4a39-9f7a-c4ffe98f53f8" containerName="horizon-log" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.232071 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.235615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.237885 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.245407 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-config-data\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7jv\" (UniqueName: \"kubernetes.io/projected/995325a7-d7b0-4aef-9eaf-fae94df071b8-kube-api-access-qv7jv\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279838 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-run-httpd\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279888 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-log-httpd\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279922 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-scripts\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.279960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 
06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.384847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.384904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-config-data\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.384938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7jv\" (UniqueName: \"kubernetes.io/projected/995325a7-d7b0-4aef-9eaf-fae94df071b8-kube-api-access-qv7jv\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.384994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-run-httpd\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.385066 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-log-httpd\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.385120 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-scripts\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.385169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.385697 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-log-httpd\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.385953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-run-httpd\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.394008 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-scripts\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.409540 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.410857 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.415634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-config-data\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.415929 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7jv\" (UniqueName: \"kubernetes.io/projected/995325a7-d7b0-4aef-9eaf-fae94df071b8-kube-api-access-qv7jv\") pod \"ceilometer-0\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") " pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.602879 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.675444 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690274 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88b8fceb-4889-40d2-99cb-23e8ffe00a81-logs\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690507 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690574 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-combined-ca-bundle\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690603 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-scripts\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690629 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data-custom\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690690 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-rcr4g\" (UniqueName: \"kubernetes.io/projected/88b8fceb-4889-40d2-99cb-23e8ffe00a81-kube-api-access-rcr4g\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88b8fceb-4889-40d2-99cb-23e8ffe00a81-etc-machine-id\") pod \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\" (UID: \"88b8fceb-4889-40d2-99cb-23e8ffe00a81\") " Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.690867 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88b8fceb-4889-40d2-99cb-23e8ffe00a81-logs" (OuterVolumeSpecName: "logs") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.691205 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88b8fceb-4889-40d2-99cb-23e8ffe00a81-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.691249 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88b8fceb-4889-40d2-99cb-23e8ffe00a81-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.697961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-scripts" (OuterVolumeSpecName: "scripts") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.701471 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.701508 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b8fceb-4889-40d2-99cb-23e8ffe00a81-kube-api-access-rcr4g" (OuterVolumeSpecName: "kube-api-access-rcr4g") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "kube-api-access-rcr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.730746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.759219 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data" (OuterVolumeSpecName: "config-data") pod "88b8fceb-4889-40d2-99cb-23e8ffe00a81" (UID: "88b8fceb-4889-40d2-99cb-23e8ffe00a81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.806143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.808642 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88b8fceb-4889-40d2-99cb-23e8ffe00a81-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.808847 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.808867 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.808878 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.808889 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88b8fceb-4889-40d2-99cb-23e8ffe00a81-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:33 crc kubenswrapper[4832]: I1204 06:29:33.808901 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcr4g\" (UniqueName: \"kubernetes.io/projected/88b8fceb-4889-40d2-99cb-23e8ffe00a81-kube-api-access-rcr4g\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.055862 4832 generic.go:334] "Generic (PLEG): container finished" podID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerID="a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037" exitCode=0 Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.056229 4832 generic.go:334] "Generic (PLEG): container finished" podID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerID="8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4" exitCode=143 Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.055932 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.055955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88b8fceb-4889-40d2-99cb-23e8ffe00a81","Type":"ContainerDied","Data":"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037"} Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.056492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88b8fceb-4889-40d2-99cb-23e8ffe00a81","Type":"ContainerDied","Data":"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4"} Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.056519 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"88b8fceb-4889-40d2-99cb-23e8ffe00a81","Type":"ContainerDied","Data":"5011f3ace2fab2d4d20abeef90dd0c8634b988e7194722ff3c2c82c3fb7b4aab"} Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.056543 4832 scope.go:117] "RemoveContainer" containerID="a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.114567 4832 scope.go:117] "RemoveContainer" containerID="8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.119682 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bdbc57ff5-2cpdh" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.124149 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.136612 4832 scope.go:117] "RemoveContainer" containerID="a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037" Dec 04 06:29:34 crc kubenswrapper[4832]: E1204 06:29:34.137133 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037\": container with ID starting with a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037 not found: ID does not exist" containerID="a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.137167 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037"} err="failed to get container status \"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037\": rpc error: code = NotFound desc = could not find container \"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037\": container with ID starting with a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037 not found: ID does not exist" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.137191 4832 scope.go:117] "RemoveContainer" containerID="8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4" Dec 04 06:29:34 crc kubenswrapper[4832]: E1204 06:29:34.137428 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4\": container with ID starting with 8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4 not found: ID does not exist" containerID="8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 
06:29:34.137450 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4"} err="failed to get container status \"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4\": rpc error: code = NotFound desc = could not find container \"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4\": container with ID starting with 8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4 not found: ID does not exist" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.137461 4832 scope.go:117] "RemoveContainer" containerID="a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.137669 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037"} err="failed to get container status \"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037\": rpc error: code = NotFound desc = could not find container \"a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037\": container with ID starting with a8a88b18ef20894ae11ae4f58292fda2edec2ffd6a44519f7fe4cfd49835c037 not found: ID does not exist" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.137689 4832 scope.go:117] "RemoveContainer" containerID="8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.137854 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4"} err="failed to get container status \"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4\": rpc error: code = NotFound desc = could not find container \"8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4\": container with ID starting with 8c72c1e42f74f6be0191edca083cd9991ae21b121f37da908b26878470e35df4 not found: ID does not exist" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.150084 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.173132 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:34 crc kubenswrapper[4832]: E1204 06:29:34.173787 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api-log" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.173804 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api-log" Dec 04 06:29:34 crc kubenswrapper[4832]: E1204 06:29:34.173817 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.173823 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.174009 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" containerName="cinder-api-log" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.174024 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" 
containerName="cinder-api" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.175210 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.184401 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.184630 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.184769 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 04 06:29:34 crc kubenswrapper[4832]: W1204 06:29:34.193542 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod995325a7_d7b0_4aef_9eaf_fae94df071b8.slice/crio-f328edd8f8a404307d12b43d616c88e24c8aced542308f816f063488b8be7812 WatchSource:0}: Error finding container f328edd8f8a404307d12b43d616c88e24c8aced542308f816f063488b8be7812: Status 404 returned error can't find the container with id f328edd8f8a404307d12b43d616c88e24c8aced542308f816f063488b8be7812 Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.197076 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.233138 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.247230 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c689555f6-jht44"] Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.247571 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c689555f6-jht44" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-api" containerID="cri-o://ce8bff225dacd20b8d02aed97dc869d6b47eea08e3c8f940389f1372a9a8fec0" gracePeriod=30 Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.247927 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c689555f6-jht44" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-httpd" containerID="cri-o://a200f72b817a2525eff1d166330bf820654a903e28cce83bc5e70d422431c528" gracePeriod=30 Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.337828 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-scripts\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.337903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-config-data\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.338193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: 
I1204 06:29:34.338281 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5g7c\" (UniqueName: \"kubernetes.io/projected/9b68da07-a347-452b-85c0-ba171d852d15-kube-api-access-g5g7c\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.338497 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.338695 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.338737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b68da07-a347-452b-85c0-ba171d852d15-logs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.338920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b68da07-a347-452b-85c0-ba171d852d15-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.338953 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.440909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.440953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b68da07-a347-452b-85c0-ba171d852d15-logs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441015 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b68da07-a347-452b-85c0-ba171d852d15-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-internal-tls-certs\") pod 
\"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-scripts\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441098 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-config-data\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5g7c\" (UniqueName: \"kubernetes.io/projected/9b68da07-a347-452b-85c0-ba171d852d15-kube-api-access-g5g7c\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.441216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.442476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b68da07-a347-452b-85c0-ba171d852d15-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.442575 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b68da07-a347-452b-85c0-ba171d852d15-logs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.448347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.449351 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.449590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-scripts\") pod \"cinder-api-0\" (UID: 
\"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.451877 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.452917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-config-data\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.460123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b68da07-a347-452b-85c0-ba171d852d15-config-data-custom\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.461879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5g7c\" (UniqueName: \"kubernetes.io/projected/9b68da07-a347-452b-85c0-ba171d852d15-kube-api-access-g5g7c\") pod \"cinder-api-0\" (UID: \"9b68da07-a347-452b-85c0-ba171d852d15\") " pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.564361 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.760292 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b8fceb-4889-40d2-99cb-23e8ffe00a81" path="/var/lib/kubelet/pods/88b8fceb-4889-40d2-99cb-23e8ffe00a81/volumes" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.761677 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe54a9ec-6e1c-4745-95df-4c56a07ce2f2" path="/var/lib/kubelet/pods/fe54a9ec-6e1c-4745-95df-4c56a07ce2f2/volumes" Dec 04 06:29:34 crc kubenswrapper[4832]: I1204 06:29:34.977204 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069104 4832 generic.go:334] "Generic (PLEG): container finished" podID="deac54af-5a7d-4356-9af2-911f17ab4129" containerID="31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00" exitCode=0 Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069223 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" event={"ID":"deac54af-5a7d-4356-9af2-911f17ab4129","Type":"ContainerDied","Data":"31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00"} Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" event={"ID":"deac54af-5a7d-4356-9af2-911f17ab4129","Type":"ContainerDied","Data":"31271d54ed658b9cb5f4b9c318fe490528e1b427ed32bc89cd52613fcdbb8d17"} Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069302 4832 scope.go:117] "RemoveContainer" containerID="31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069476 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64fd7ddbcd-4qjbk" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deac54af-5a7d-4356-9af2-911f17ab4129-logs\") pod \"deac54af-5a7d-4356-9af2-911f17ab4129\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-combined-ca-bundle\") pod \"deac54af-5a7d-4356-9af2-911f17ab4129\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhsxv\" (UniqueName: \"kubernetes.io/projected/deac54af-5a7d-4356-9af2-911f17ab4129-kube-api-access-lhsxv\") pod \"deac54af-5a7d-4356-9af2-911f17ab4129\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069780 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data\") pod \"deac54af-5a7d-4356-9af2-911f17ab4129\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.069806 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data-custom\") pod \"deac54af-5a7d-4356-9af2-911f17ab4129\" (UID: \"deac54af-5a7d-4356-9af2-911f17ab4129\") " Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.071061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deac54af-5a7d-4356-9af2-911f17ab4129-logs" (OuterVolumeSpecName: "logs") pod "deac54af-5a7d-4356-9af2-911f17ab4129" (UID: "deac54af-5a7d-4356-9af2-911f17ab4129"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.079541 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deac54af-5a7d-4356-9af2-911f17ab4129-kube-api-access-lhsxv" (OuterVolumeSpecName: "kube-api-access-lhsxv") pod "deac54af-5a7d-4356-9af2-911f17ab4129" (UID: "deac54af-5a7d-4356-9af2-911f17ab4129"). InnerVolumeSpecName "kube-api-access-lhsxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.089843 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "deac54af-5a7d-4356-9af2-911f17ab4129" (UID: "deac54af-5a7d-4356-9af2-911f17ab4129"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.095925 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerStarted","Data":"e3ecacdcefa5304b6b99296f1ac7b6981da412c32eb14e1ee8739ce62b2dd88d"} Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.095986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerStarted","Data":"f328edd8f8a404307d12b43d616c88e24c8aced542308f816f063488b8be7812"} Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.109255 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerID="a200f72b817a2525eff1d166330bf820654a903e28cce83bc5e70d422431c528" exitCode=0 Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.109346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c689555f6-jht44" event={"ID":"2d4a9484-df22-4c06-bd80-b71c9b785d40","Type":"ContainerDied","Data":"a200f72b817a2525eff1d166330bf820654a903e28cce83bc5e70d422431c528"} Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.112453 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deac54af-5a7d-4356-9af2-911f17ab4129" (UID: "deac54af-5a7d-4356-9af2-911f17ab4129"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.138704 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.151874 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data" (OuterVolumeSpecName: "config-data") pod "deac54af-5a7d-4356-9af2-911f17ab4129" (UID: "deac54af-5a7d-4356-9af2-911f17ab4129"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:35 crc kubenswrapper[4832]: W1204 06:29:35.151899 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b68da07_a347_452b_85c0_ba171d852d15.slice/crio-ac13e5012c670fcffd9a379b89b5bffa5eba2b53784f77ed00108cbaa18537e2 WatchSource:0}: Error finding container ac13e5012c670fcffd9a379b89b5bffa5eba2b53784f77ed00108cbaa18537e2: Status 404 returned error can't find the container with id ac13e5012c670fcffd9a379b89b5bffa5eba2b53784f77ed00108cbaa18537e2 Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.172723 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhsxv\" (UniqueName: \"kubernetes.io/projected/deac54af-5a7d-4356-9af2-911f17ab4129-kube-api-access-lhsxv\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.172759 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.172772 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.172787 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deac54af-5a7d-4356-9af2-911f17ab4129-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.172840 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deac54af-5a7d-4356-9af2-911f17ab4129-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.284566 4832 scope.go:117] "RemoveContainer" containerID="11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.308799 4832 scope.go:117] "RemoveContainer" containerID="31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00" Dec 04 06:29:35 crc kubenswrapper[4832]: E1204 06:29:35.309432 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00\": container with ID starting with 31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00 not found: ID does not exist" containerID="31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.309514 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00"} err="failed to get container status \"31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00\": rpc error: code = NotFound desc = could not find container \"31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00\": container with ID starting with 31053692dbe911bcf9af725fbe44781eac5bd27e0162c34cfee3444945b72a00 not found: ID does not exist" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.309597 4832 scope.go:117] "RemoveContainer" containerID="11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d" Dec 04 06:29:35 crc 
kubenswrapper[4832]: E1204 06:29:35.310001 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d\": container with ID starting with 11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d not found: ID does not exist" containerID="11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.310054 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d"} err="failed to get container status \"11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d\": rpc error: code = NotFound desc = could not find container \"11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d\": container with ID starting with 11f0af6d56f6444b139e3f49dd2ad42d365f90d78ec9beff6e03849d11c0bc7d not found: ID does not exist" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.362564 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.362637 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.432693 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64fd7ddbcd-4qjbk"] Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.451358 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-64fd7ddbcd-4qjbk"] Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.791586 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:35 crc kubenswrapper[4832]: I1204 06:29:35.832819 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56555b86cd-htxqh" Dec 04 06:29:36 crc kubenswrapper[4832]: I1204 06:29:36.159989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9b68da07-a347-452b-85c0-ba171d852d15","Type":"ContainerStarted","Data":"9c01e2d02c9ea38d9ff32e35e05e384fb60bf7850f118e02cb1c215124cd7039"} Dec 04 06:29:36 crc kubenswrapper[4832]: I1204 06:29:36.160933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9b68da07-a347-452b-85c0-ba171d852d15","Type":"ContainerStarted","Data":"ac13e5012c670fcffd9a379b89b5bffa5eba2b53784f77ed00108cbaa18537e2"} Dec 04 06:29:36 crc kubenswrapper[4832]: I1204 06:29:36.167694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerStarted","Data":"b3418721332c3f211f24bf3a9443fcd961a19a1ad33f9fb6c8b05ce89cd82fc4"} Dec 04 06:29:36 crc kubenswrapper[4832]: I1204 06:29:36.725203 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" 
path="/var/lib/kubelet/pods/deac54af-5a7d-4356-9af2-911f17ab4129/volumes" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.178930 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerID="ce8bff225dacd20b8d02aed97dc869d6b47eea08e3c8f940389f1372a9a8fec0" exitCode=0 Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.179287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c689555f6-jht44" event={"ID":"2d4a9484-df22-4c06-bd80-b71c9b785d40","Type":"ContainerDied","Data":"ce8bff225dacd20b8d02aed97dc869d6b47eea08e3c8f940389f1372a9a8fec0"} Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.179322 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c689555f6-jht44" event={"ID":"2d4a9484-df22-4c06-bd80-b71c9b785d40","Type":"ContainerDied","Data":"0239f2435af0e62c4f37dfcf7d3450c06564ac18145684751a5c10fb245e09e1"} Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.179334 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0239f2435af0e62c4f37dfcf7d3450c06564ac18145684751a5c10fb245e09e1" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.181089 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9b68da07-a347-452b-85c0-ba171d852d15","Type":"ContainerStarted","Data":"3cd0b6b33539d76f7fc4977bbf032fa8b76bf669d072219ce5f9b64d6eac9fcb"} Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.182345 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.184979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerStarted","Data":"e1aee958b8bf582fedf3e11836b21bfff54e81b9ca52ab237ece774925fea69f"} Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.208213 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.208196323 podStartE2EDuration="3.208196323s" podCreationTimestamp="2025-12-04 06:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:37.202671536 +0000 UTC m=+1232.815489242" watchObservedRunningTime="2025-12-04 06:29:37.208196323 +0000 UTC m=+1232.821014029" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.263057 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.444718 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6v\" (UniqueName: \"kubernetes.io/projected/2d4a9484-df22-4c06-bd80-b71c9b785d40-kube-api-access-zth6v\") pod \"2d4a9484-df22-4c06-bd80-b71c9b785d40\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.444810 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-combined-ca-bundle\") pod \"2d4a9484-df22-4c06-bd80-b71c9b785d40\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.444970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-httpd-config\") pod \"2d4a9484-df22-4c06-bd80-b71c9b785d40\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.445019 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-config\") pod \"2d4a9484-df22-4c06-bd80-b71c9b785d40\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.445056 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-ovndb-tls-certs\") pod \"2d4a9484-df22-4c06-bd80-b71c9b785d40\" (UID: \"2d4a9484-df22-4c06-bd80-b71c9b785d40\") " Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.455501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4a9484-df22-4c06-bd80-b71c9b785d40-kube-api-access-zth6v" (OuterVolumeSpecName: "kube-api-access-zth6v") pod "2d4a9484-df22-4c06-bd80-b71c9b785d40" (UID: "2d4a9484-df22-4c06-bd80-b71c9b785d40"). InnerVolumeSpecName "kube-api-access-zth6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.459555 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2d4a9484-df22-4c06-bd80-b71c9b785d40" (UID: "2d4a9484-df22-4c06-bd80-b71c9b785d40"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.504750 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d4a9484-df22-4c06-bd80-b71c9b785d40" (UID: "2d4a9484-df22-4c06-bd80-b71c9b785d40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.512332 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-config" (OuterVolumeSpecName: "config") pod "2d4a9484-df22-4c06-bd80-b71c9b785d40" (UID: "2d4a9484-df22-4c06-bd80-b71c9b785d40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.547200 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zth6v\" (UniqueName: \"kubernetes.io/projected/2d4a9484-df22-4c06-bd80-b71c9b785d40-kube-api-access-zth6v\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.547233 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.547245 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.547254 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.547876 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2d4a9484-df22-4c06-bd80-b71c9b785d40" (UID: "2d4a9484-df22-4c06-bd80-b71c9b785d40"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:37 crc kubenswrapper[4832]: I1204 06:29:37.649719 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4a9484-df22-4c06-bd80-b71c9b785d40-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.191734 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c689555f6-jht44" Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.233911 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c689555f6-jht44"] Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.242685 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c689555f6-jht44"] Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.721701 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" path="/var/lib/kubelet/pods/2d4a9484-df22-4c06-bd80-b71c9b785d40/volumes" Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.905591 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.964244 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87ghz"] Dec 04 06:29:38 crc kubenswrapper[4832]: I1204 06:29:38.968546 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerName="dnsmasq-dns" containerID="cri-o://0f79130ad9c9f4734d9097aa97a410105810e0a2a04c14679cba498f1f1a5e03" gracePeriod=10 Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.068024 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.114642 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.262408 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerID="0f79130ad9c9f4734d9097aa97a410105810e0a2a04c14679cba498f1f1a5e03" exitCode=0 Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.262509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" event={"ID":"1c1f0f22-b600-4323-9799-b0d2125a8ce7","Type":"ContainerDied","Data":"0f79130ad9c9f4734d9097aa97a410105810e0a2a04c14679cba498f1f1a5e03"} Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.278476 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerStarted","Data":"bac97039fdf1c8c2a59fde3c888dadf51cf42fa5cfd52873b61d3d3f0734946f"} Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.278922 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="cinder-scheduler" containerID="cri-o://467f0c3a81ad6aeec6a31963cea253334623015f968aff2388da6e76bf8d9ded" gracePeriod=30 Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.280276 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="probe" containerID="cri-o://65a9ac6e89292f4f75ab7c31a153e74abd5ec8681d6dc1189f4d966a5db11ea0" gracePeriod=30 Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.328986 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.769137778 podStartE2EDuration="6.328963354s" podCreationTimestamp="2025-12-04 06:29:33 +0000 UTC" firstStartedPulling="2025-12-04 
06:29:34.196448531 +0000 UTC m=+1229.809266237" lastFinishedPulling="2025-12-04 06:29:37.756274107 +0000 UTC m=+1233.369091813" observedRunningTime="2025-12-04 06:29:39.314127037 +0000 UTC m=+1234.926944753" watchObservedRunningTime="2025-12-04 06:29:39.328963354 +0000 UTC m=+1234.941781060" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.531890 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.623007 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5fb459446f-clqb5" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.663798 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.796016 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-config\") pod \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.796327 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-sb\") pod \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.796622 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-swift-storage-0\") pod \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.796730 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-svc\") pod \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.796890 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-nb\") pod \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.797045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ksb\" (UniqueName: \"kubernetes.io/projected/1c1f0f22-b600-4323-9799-b0d2125a8ce7-kube-api-access-46ksb\") pod \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\" (UID: \"1c1f0f22-b600-4323-9799-b0d2125a8ce7\") " Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.808311 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1f0f22-b600-4323-9799-b0d2125a8ce7-kube-api-access-46ksb" (OuterVolumeSpecName: "kube-api-access-46ksb") pod "1c1f0f22-b600-4323-9799-b0d2125a8ce7" (UID: "1c1f0f22-b600-4323-9799-b0d2125a8ce7"). InnerVolumeSpecName "kube-api-access-46ksb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.832363 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.915304 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ksb\" (UniqueName: \"kubernetes.io/projected/1c1f0f22-b600-4323-9799-b0d2125a8ce7-kube-api-access-46ksb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.930960 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c1f0f22-b600-4323-9799-b0d2125a8ce7" (UID: "1c1f0f22-b600-4323-9799-b0d2125a8ce7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.941930 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c1f0f22-b600-4323-9799-b0d2125a8ce7" (UID: "1c1f0f22-b600-4323-9799-b0d2125a8ce7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.954879 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c1f0f22-b600-4323-9799-b0d2125a8ce7" (UID: "1c1f0f22-b600-4323-9799-b0d2125a8ce7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.956341 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-config" (OuterVolumeSpecName: "config") pod "1c1f0f22-b600-4323-9799-b0d2125a8ce7" (UID: "1c1f0f22-b600-4323-9799-b0d2125a8ce7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:39 crc kubenswrapper[4832]: I1204 06:29:39.963668 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c1f0f22-b600-4323-9799-b0d2125a8ce7" (UID: "1c1f0f22-b600-4323-9799-b0d2125a8ce7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.017066 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.017105 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.017117 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.017127 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.017137 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1f0f22-b600-4323-9799-b0d2125a8ce7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.287633 4832 generic.go:334] "Generic (PLEG): container finished" podID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerID="65a9ac6e89292f4f75ab7c31a153e74abd5ec8681d6dc1189f4d966a5db11ea0" exitCode=0 Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.287724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d27350cb-2b8b-4f39-bc71-a7efd2d56004","Type":"ContainerDied","Data":"65a9ac6e89292f4f75ab7c31a153e74abd5ec8681d6dc1189f4d966a5db11ea0"} Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.289795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" event={"ID":"1c1f0f22-b600-4323-9799-b0d2125a8ce7","Type":"ContainerDied","Data":"470b9a50152e90f6f24cf3564fb81c4d66f7bcb0dac928d12264355aa1a8ff40"} Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.289854 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-87ghz" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.289861 4832 scope.go:117] "RemoveContainer" containerID="0f79130ad9c9f4734d9097aa97a410105810e0a2a04c14679cba498f1f1a5e03" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.289995 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.311279 4832 scope.go:117] "RemoveContainer" containerID="da9296cae1b1b66f3c5558fc5956e067dfdd3d8158583fdf4d637ad8746d7d5a" Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.326527 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87ghz"] Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.335580 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-87ghz"] Dec 04 06:29:40 crc kubenswrapper[4832]: I1204 06:29:40.721312 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" path="/var/lib/kubelet/pods/1c1f0f22-b600-4323-9799-b0d2125a8ce7/volumes" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.289532 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 04 06:29:41 crc kubenswrapper[4832]: E1204 06:29:41.290195 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api-log" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290212 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api-log" Dec 04 06:29:41 crc kubenswrapper[4832]: E1204 06:29:41.290237 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerName="init" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290244 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerName="init" Dec 04 06:29:41 crc kubenswrapper[4832]: E1204 06:29:41.290258 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerName="dnsmasq-dns" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290264 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerName="dnsmasq-dns" Dec 04 06:29:41 crc kubenswrapper[4832]: E1204 06:29:41.290275 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-httpd" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290283 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-httpd" Dec 04 06:29:41 crc kubenswrapper[4832]: E1204 06:29:41.290294 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290301 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api" Dec 04 06:29:41 crc kubenswrapper[4832]: E1204 06:29:41.290322 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-api" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290328 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-api" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290510 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-httpd" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290519 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4a9484-df22-4c06-bd80-b71c9b785d40" containerName="neutron-api" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290537 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c1f0f22-b600-4323-9799-b0d2125a8ce7" containerName="dnsmasq-dns" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290546 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.290555 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="deac54af-5a7d-4356-9af2-911f17ab4129" containerName="barbican-api-log" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.291144 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.294455 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-njjbm" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.294483 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.299919 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.311179 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.445622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4936447c-abfd-4bad-b720-db17f1bca70c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.446105 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4936447c-abfd-4bad-b720-db17f1bca70c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.446172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rmk\" (UniqueName: \"kubernetes.io/projected/4936447c-abfd-4bad-b720-db17f1bca70c-kube-api-access-72rmk\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.446482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4936447c-abfd-4bad-b720-db17f1bca70c-openstack-config\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc 
kubenswrapper[4832]: I1204 06:29:41.548282 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4936447c-abfd-4bad-b720-db17f1bca70c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.548426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4936447c-abfd-4bad-b720-db17f1bca70c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.548454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rmk\" (UniqueName: \"kubernetes.io/projected/4936447c-abfd-4bad-b720-db17f1bca70c-kube-api-access-72rmk\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.548496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4936447c-abfd-4bad-b720-db17f1bca70c-openstack-config\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.549634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4936447c-abfd-4bad-b720-db17f1bca70c-openstack-config\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.584666 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4936447c-abfd-4bad-b720-db17f1bca70c-openstack-config-secret\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.584770 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4936447c-abfd-4bad-b720-db17f1bca70c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.587061 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rmk\" (UniqueName: \"kubernetes.io/projected/4936447c-abfd-4bad-b720-db17f1bca70c-kube-api-access-72rmk\") pod \"openstackclient\" (UID: \"4936447c-abfd-4bad-b720-db17f1bca70c\") " pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.604269 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-847bcdcbb8-ph9ks" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.624279 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.682058 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-587db8c9db-9blcn"] Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.682298 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon-log" containerID="cri-o://e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6" gracePeriod=30 Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.682999 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" containerID="cri-o://d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339" gracePeriod=30 Dec 04 06:29:41 crc kubenswrapper[4832]: I1204 06:29:41.695462 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 04 06:29:42 crc kubenswrapper[4832]: I1204 06:29:42.189045 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 04 06:29:42 crc kubenswrapper[4832]: I1204 06:29:42.323141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4936447c-abfd-4bad-b720-db17f1bca70c","Type":"ContainerStarted","Data":"e4fb6e47a523c359a0690ee46c9dd3a6a4d8922d625199768d805bd0ca6c2304"} Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.338964 4832 generic.go:334] "Generic (PLEG): container finished" podID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerID="467f0c3a81ad6aeec6a31963cea253334623015f968aff2388da6e76bf8d9ded" exitCode=0 Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.339041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d27350cb-2b8b-4f39-bc71-a7efd2d56004","Type":"ContainerDied","Data":"467f0c3a81ad6aeec6a31963cea253334623015f968aff2388da6e76bf8d9ded"} Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.829545 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.995499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data\") pod \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.995787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-combined-ca-bundle\") pod \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.996489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-scripts\") pod \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.996522 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkpf4\" (UniqueName: \"kubernetes.io/projected/d27350cb-2b8b-4f39-bc71-a7efd2d56004-kube-api-access-tkpf4\") pod \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.996582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d27350cb-2b8b-4f39-bc71-a7efd2d56004-etc-machine-id\") pod \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " Dec 04 06:29:43 crc kubenswrapper[4832]: I1204 06:29:43.996622 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data-custom\") pod \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\" (UID: \"d27350cb-2b8b-4f39-bc71-a7efd2d56004\") " Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:43.997862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d27350cb-2b8b-4f39-bc71-a7efd2d56004-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d27350cb-2b8b-4f39-bc71-a7efd2d56004" (UID: "d27350cb-2b8b-4f39-bc71-a7efd2d56004"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.003248 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27350cb-2b8b-4f39-bc71-a7efd2d56004-kube-api-access-tkpf4" (OuterVolumeSpecName: "kube-api-access-tkpf4") pod "d27350cb-2b8b-4f39-bc71-a7efd2d56004" (UID: "d27350cb-2b8b-4f39-bc71-a7efd2d56004"). InnerVolumeSpecName "kube-api-access-tkpf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.006490 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-scripts" (OuterVolumeSpecName: "scripts") pod "d27350cb-2b8b-4f39-bc71-a7efd2d56004" (UID: "d27350cb-2b8b-4f39-bc71-a7efd2d56004"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.009642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d27350cb-2b8b-4f39-bc71-a7efd2d56004" (UID: "d27350cb-2b8b-4f39-bc71-a7efd2d56004"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.077814 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d27350cb-2b8b-4f39-bc71-a7efd2d56004" (UID: "d27350cb-2b8b-4f39-bc71-a7efd2d56004"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.100307 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.100469 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.100548 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkpf4\" (UniqueName: \"kubernetes.io/projected/d27350cb-2b8b-4f39-bc71-a7efd2d56004-kube-api-access-tkpf4\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.100634 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d27350cb-2b8b-4f39-bc71-a7efd2d56004-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.100696 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.114501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data" (OuterVolumeSpecName: "config-data") pod "d27350cb-2b8b-4f39-bc71-a7efd2d56004" (UID: "d27350cb-2b8b-4f39-bc71-a7efd2d56004"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.202207 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27350cb-2b8b-4f39-bc71-a7efd2d56004-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.357409 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d27350cb-2b8b-4f39-bc71-a7efd2d56004","Type":"ContainerDied","Data":"93068733f4d9025e6dd824643e0a219aee3ceceb9086dc271e12f1b05d5ac87b"} Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.357482 4832 scope.go:117] "RemoveContainer" containerID="65a9ac6e89292f4f75ab7c31a153e74abd5ec8681d6dc1189f4d966a5db11ea0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.357610 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.427587 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.441927 4832 scope.go:117] "RemoveContainer" containerID="467f0c3a81ad6aeec6a31963cea253334623015f968aff2388da6e76bf8d9ded" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.449599 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.464140 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:44 crc kubenswrapper[4832]: E1204 06:29:44.464552 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="probe" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.464567 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="probe" Dec 04 06:29:44 crc kubenswrapper[4832]: E1204 06:29:44.464594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="cinder-scheduler" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.464601 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="cinder-scheduler" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.464775 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="cinder-scheduler" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.464789 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" containerName="probe" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.465783 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.471186 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.475237 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.610766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.610830 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.610900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.610940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.611125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.611187 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmt22\" (UniqueName: \"kubernetes.io/projected/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-kube-api-access-fmt22\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.713795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.713869 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.713890 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.714036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.714147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.714246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.714292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmt22\" (UniqueName: \"kubernetes.io/projected/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-kube-api-access-fmt22\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.720223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.722036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.724448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-config-data\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.724904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-scripts\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.726621 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27350cb-2b8b-4f39-bc71-a7efd2d56004" path="/var/lib/kubelet/pods/d27350cb-2b8b-4f39-bc71-a7efd2d56004/volumes" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.732103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fmt22\" (UniqueName: \"kubernetes.io/projected/3fc4e266-71ba-403f-a4a3-6c7dc12995b7-kube-api-access-fmt22\") pod \"cinder-scheduler-0\" (UID: \"3fc4e266-71ba-403f-a4a3-6c7dc12995b7\") " pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.797137 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 06:29:44 crc kubenswrapper[4832]: I1204 06:29:44.841027 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:36342->10.217.0.142:8443: read: connection reset by peer" Dec 04 06:29:45 crc kubenswrapper[4832]: I1204 06:29:45.325116 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 06:29:45 crc kubenswrapper[4832]: W1204 06:29:45.327508 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc4e266_71ba_403f_a4a3_6c7dc12995b7.slice/crio-f3d4bb5157858d36a92fa256ab128071348eed4fc5d8aea92b8db9b058be0d37 WatchSource:0}: Error finding container f3d4bb5157858d36a92fa256ab128071348eed4fc5d8aea92b8db9b058be0d37: Status 404 returned error can't find the container with id f3d4bb5157858d36a92fa256ab128071348eed4fc5d8aea92b8db9b058be0d37 Dec 04 06:29:45 crc kubenswrapper[4832]: I1204 06:29:45.378846 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3fc4e266-71ba-403f-a4a3-6c7dc12995b7","Type":"ContainerStarted","Data":"f3d4bb5157858d36a92fa256ab128071348eed4fc5d8aea92b8db9b058be0d37"} Dec 04 06:29:45 crc kubenswrapper[4832]: I1204 06:29:45.384636 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerID="d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339" exitCode=0 Dec 04 06:29:45 crc kubenswrapper[4832]: I1204 06:29:45.384697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587db8c9db-9blcn" event={"ID":"a6361378-b3ff-41c4-a77e-3bb4a1482984","Type":"ContainerDied","Data":"d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339"} Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.152506 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.153525 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-log" containerID="cri-o://205dc7e0e8a170782eaced157939013eb65c23f71592c0afdd68a0bea3ecc85e" gracePeriod=30 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.154074 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-httpd" containerID="cri-o://df25075882d3aa12187faabeadf1bdaf9bc0b4998db66a662db347de86b50993" gracePeriod=30 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.222412 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8675b9cf45-xl2pz"] Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.224505 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.229423 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.229664 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.229906 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.239250 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8675b9cf45-xl2pz"] Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385575 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-log-httpd\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-internal-tls-certs\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385693 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92tcr\" (UniqueName: \"kubernetes.io/projected/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-kube-api-access-92tcr\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-config-data\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-public-tls-certs\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-etc-swift\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.385962 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-run-httpd\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " 
pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.386034 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-combined-ca-bundle\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.410740 4832 generic.go:334] "Generic (PLEG): container finished" podID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerID="205dc7e0e8a170782eaced157939013eb65c23f71592c0afdd68a0bea3ecc85e" exitCode=143 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.411118 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"875b6361-178d-40f6-b4d0-328dd939c7c1","Type":"ContainerDied","Data":"205dc7e0e8a170782eaced157939013eb65c23f71592c0afdd68a0bea3ecc85e"} Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.418101 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3fc4e266-71ba-403f-a4a3-6c7dc12995b7","Type":"ContainerStarted","Data":"786aa3094862e44ae1f3f647d7d77566ccf45e6fe0021ce132d2bca2211c17a4"} Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488464 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-internal-tls-certs\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488559 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92tcr\" (UniqueName: \"kubernetes.io/projected/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-kube-api-access-92tcr\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-config-data\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-public-tls-certs\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-etc-swift\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-run-httpd\") pod 
\"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-combined-ca-bundle\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.488817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-log-httpd\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.489366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-log-httpd\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.490298 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-run-httpd\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.497154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-internal-tls-certs\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.497160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-public-tls-certs\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.497516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-config-data\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.498428 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-etc-swift\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.499048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-combined-ca-bundle\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 
06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.511172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92tcr\" (UniqueName: \"kubernetes.io/projected/6b7c2a80-b3fe-4243-9ea6-19e34f132a16-kube-api-access-92tcr\") pod \"swift-proxy-8675b9cf45-xl2pz\" (UID: \"6b7c2a80-b3fe-4243-9ea6-19e34f132a16\") " pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.578352 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.653978 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.654577 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-central-agent" containerID="cri-o://e3ecacdcefa5304b6b99296f1ac7b6981da412c32eb14e1ee8739ce62b2dd88d" gracePeriod=30 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.655013 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="proxy-httpd" containerID="cri-o://bac97039fdf1c8c2a59fde3c888dadf51cf42fa5cfd52873b61d3d3f0734946f" gracePeriod=30 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.655061 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="sg-core" containerID="cri-o://e1aee958b8bf582fedf3e11836b21bfff54e81b9ca52ab237ece774925fea69f" gracePeriod=30 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.655099 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-notification-agent" containerID="cri-o://b3418721332c3f211f24bf3a9443fcd961a19a1ad33f9fb6c8b05ce89cd82fc4" gracePeriod=30 Dec 04 06:29:46 crc kubenswrapper[4832]: I1204 06:29:46.683460 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.304452 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8675b9cf45-xl2pz"] Dec 04 06:29:47 crc kubenswrapper[4832]: W1204 06:29:47.321918 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b7c2a80_b3fe_4243_9ea6_19e34f132a16.slice/crio-4e96c1cbff9013345b2e66f31452ab0427e31f3fa7ef75d3d2ae534f0bfe3b57 WatchSource:0}: Error finding container 4e96c1cbff9013345b2e66f31452ab0427e31f3fa7ef75d3d2ae534f0bfe3b57: Status 404 returned error can't find the container with id 4e96c1cbff9013345b2e66f31452ab0427e31f3fa7ef75d3d2ae534f0bfe3b57 Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.454985 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3fc4e266-71ba-403f-a4a3-6c7dc12995b7","Type":"ContainerStarted","Data":"dc0f5213bde450816512a3df95856f7c5af37eac2ce4d231cc487eafbcc4f9d7"} Dec 04 06:29:47 crc 
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458456 4832 generic.go:334] "Generic (PLEG): container finished" podID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerID="e1aee958b8bf582fedf3e11836b21bfff54e81b9ca52ab237ece774925fea69f" exitCode=2
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458465 4832 generic.go:334] "Generic (PLEG): container finished" podID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerID="b3418721332c3f211f24bf3a9443fcd961a19a1ad33f9fb6c8b05ce89cd82fc4" exitCode=0
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458518 4832 generic.go:334] "Generic (PLEG): container finished" podID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerID="e3ecacdcefa5304b6b99296f1ac7b6981da412c32eb14e1ee8739ce62b2dd88d" exitCode=0
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458556 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerDied","Data":"bac97039fdf1c8c2a59fde3c888dadf51cf42fa5cfd52873b61d3d3f0734946f"}
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerDied","Data":"e1aee958b8bf582fedf3e11836b21bfff54e81b9ca52ab237ece774925fea69f"}
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerDied","Data":"b3418721332c3f211f24bf3a9443fcd961a19a1ad33f9fb6c8b05ce89cd82fc4"}
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.458600 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerDied","Data":"e3ecacdcefa5304b6b99296f1ac7b6981da412c32eb14e1ee8739ce62b2dd88d"}
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.459444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8675b9cf45-xl2pz" event={"ID":"6b7c2a80-b3fe-4243-9ea6-19e34f132a16","Type":"ContainerStarted","Data":"4e96c1cbff9013345b2e66f31452ab0427e31f3fa7ef75d3d2ae534f0bfe3b57"}
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.493954 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.493936624 podStartE2EDuration="3.493936624s" podCreationTimestamp="2025-12-04 06:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:47.488865038 +0000 UTC m=+1243.101682764" watchObservedRunningTime="2025-12-04 06:29:47.493936624 +0000 UTC m=+1243.106754340"
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.877808 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 04 06:29:47 crc kubenswrapper[4832]: I1204 06:29:47.891561 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
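The exitCode= values above follow the usual convention: 0 for containers that shut down cleanly on SIGTERM (the ceilometer agents and proxy-httpd), small positive codes for application errors (sg-core's 2), and 128+N for death by signal N. A small decoder, as a sketch:

    import signal

    def describe_exit(code: int) -> str:
        if code == 0:
            return "clean exit"
        if code > 128:                      # container convention: 128 + signal number
            return f"killed by {signal.Signals(code - 128).name}"
        return f"application error (status {code})"

    for code in (0, 2, 143):
        print(code, "->", describe_exit(code))
    # 0 -> clean exit, 2 -> application error (status 2), 143 -> killed by SIGTERM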
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.046804 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv7jv\" (UniqueName: \"kubernetes.io/projected/995325a7-d7b0-4aef-9eaf-fae94df071b8-kube-api-access-qv7jv\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.046905 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-run-httpd\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.046939 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-scripts\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.047002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-sg-core-conf-yaml\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.047166 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-config-data\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.047222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-combined-ca-bundle\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.047286 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-log-httpd\") pod \"995325a7-d7b0-4aef-9eaf-fae94df071b8\" (UID: \"995325a7-d7b0-4aef-9eaf-fae94df071b8\") "
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.047730 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.047986 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.057959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
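Teardown runs the mount sequence in reverse, and the three message types here always appear in the same relative order per volume: "UnmountVolume started" (reconciler), then "UnmountVolume.TearDown succeeded" (plugin), then "Volume detached" (actual state of world updated). A tiny checker for that invariant (phase strings lifted from the log; the function itself is mine):

    EXPECTED = ["UnmountVolume started",
                "UnmountVolume.TearDown succeeded",
                "Volume detached"]

    def teardown_in_order(phases: list[str]) -> bool:
        ranks = [EXPECTED.index(p) for p in phases if p in EXPECTED]
        return ranks == sorted(ranks)

    print(teardown_in_order(EXPECTED))                       # True
    print(teardown_in_order(["Volume detached",
                             "UnmountVolume started"]))      # False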
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.062750 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995325a7-d7b0-4aef-9eaf-fae94df071b8-kube-api-access-qv7jv" (OuterVolumeSpecName: "kube-api-access-qv7jv") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "kube-api-access-qv7jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.078575 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-scripts" (OuterVolumeSpecName: "scripts") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.118822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.150923 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995325a7-d7b0-4aef-9eaf-fae94df071b8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.150966 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv7jv\" (UniqueName: \"kubernetes.io/projected/995325a7-d7b0-4aef-9eaf-fae94df071b8-kube-api-access-qv7jv\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.150980 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.150994 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.257905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.296100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-config-data" (OuterVolumeSpecName: "config-data") pod "995325a7-d7b0-4aef-9eaf-fae94df071b8" (UID: "995325a7-d7b0-4aef-9eaf-fae94df071b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.354511 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.354547 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995325a7-d7b0-4aef-9eaf-fae94df071b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.491884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8675b9cf45-xl2pz" event={"ID":"6b7c2a80-b3fe-4243-9ea6-19e34f132a16","Type":"ContainerStarted","Data":"1a10db7829949799134f492737b350e2e4217603da3ffefbccfd6fe2a9975456"} Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.491954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8675b9cf45-xl2pz" event={"ID":"6b7c2a80-b3fe-4243-9ea6-19e34f132a16","Type":"ContainerStarted","Data":"d33aced12b3b82617c6ba92b97d87cdd385c5df582e8772bd3de43bdd8f92b65"} Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.493144 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.493171 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.506839 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.508070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995325a7-d7b0-4aef-9eaf-fae94df071b8","Type":"ContainerDied","Data":"f328edd8f8a404307d12b43d616c88e24c8aced542308f816f063488b8be7812"} Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.508185 4832 scope.go:117] "RemoveContainer" containerID="bac97039fdf1c8c2a59fde3c888dadf51cf42fa5cfd52873b61d3d3f0734946f" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.519228 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8675b9cf45-xl2pz" podStartSLOduration=2.519207114 podStartE2EDuration="2.519207114s" podCreationTimestamp="2025-12-04 06:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:48.511907583 +0000 UTC m=+1244.124725289" watchObservedRunningTime="2025-12-04 06:29:48.519207114 +0000 UTC m=+1244.132024820" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.598552 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.626447 4832 scope.go:117] "RemoveContainer" containerID="e1aee958b8bf582fedf3e11836b21bfff54e81b9ca52ab237ece774925fea69f" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.627211 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.656605 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:29:48 crc kubenswrapper[4832]: E1204 06:29:48.657823 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="sg-core" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.657844 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="sg-core" Dec 04 06:29:48 crc kubenswrapper[4832]: E1204 06:29:48.657865 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-central-agent" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.657872 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-central-agent" Dec 04 06:29:48 crc kubenswrapper[4832]: E1204 06:29:48.657893 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="proxy-httpd" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.657900 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="proxy-httpd" Dec 04 06:29:48 crc kubenswrapper[4832]: E1204 06:29:48.657917 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-notification-agent" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.657923 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-notification-agent" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.658087 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-central-agent" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.658108 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="sg-core" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.658119 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="ceilometer-notification-agent" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.658134 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" containerName="proxy-httpd" Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.661248 4832 util.go:30] "No sandbox for pod can be found. 
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.666695 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.671818 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.679847 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.689560 4832 scope.go:117] "RemoveContainer" containerID="b3418721332c3f211f24bf3a9443fcd961a19a1ad33f9fb6c8b05ce89cd82fc4"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.733221 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995325a7-d7b0-4aef-9eaf-fae94df071b8" path="/var/lib/kubelet/pods/995325a7-d7b0-4aef-9eaf-fae94df071b8/volumes"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.741379 4832 scope.go:117] "RemoveContainer" containerID="e3ecacdcefa5304b6b99296f1ac7b6981da412c32eb14e1ee8739ce62b2dd88d"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.769254 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-run-httpd\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.769671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-log-httpd\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.769988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-config-data\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.770114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7gc\" (UniqueName: \"kubernetes.io/projected/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-kube-api-access-mw7gc\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.770165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.770236 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.770310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-scripts\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.871818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-config-data\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.871875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7gc\" (UniqueName: \"kubernetes.io/projected/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-kube-api-access-mw7gc\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.871894 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.871923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.871968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-scripts\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.872011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-run-httpd\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.872025 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-log-httpd\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.872474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-log-httpd\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.874139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-run-httpd\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.879337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-scripts\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.880046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.886805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.888374 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-config-data\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:48 crc kubenswrapper[4832]: I1204 06:29:48.901195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7gc\" (UniqueName: \"kubernetes.io/projected/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-kube-api-access-mw7gc\") pod \"ceilometer-0\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " pod="openstack/ceilometer-0"
Dec 04 06:29:49 crc kubenswrapper[4832]: I1204 06:29:49.027980 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 04 06:29:49 crc kubenswrapper[4832]: I1204 06:29:49.529693 4832 generic.go:334] "Generic (PLEG): container finished" podID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerID="df25075882d3aa12187faabeadf1bdaf9bc0b4998db66a662db347de86b50993" exitCode=0
Dec 04 06:29:49 crc kubenswrapper[4832]: I1204 06:29:49.530262 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"875b6361-178d-40f6-b4d0-328dd939c7c1","Type":"ContainerDied","Data":"df25075882d3aa12187faabeadf1bdaf9bc0b4998db66a662db347de86b50993"}
Dec 04 06:29:49 crc kubenswrapper[4832]: I1204 06:29:49.574928 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 06:29:49 crc kubenswrapper[4832]: W1204 06:29:49.589604 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfdebaa0_328f_4799_90f3_95f6bb41a0e5.slice/crio-2ca5d6dd74136b50251c2e936d3bee1aa7a1e496d4dfae627c8f9fa8aab56555 WatchSource:0}: Error finding container 2ca5d6dd74136b50251c2e936d3bee1aa7a1e496d4dfae627c8f9fa8aab56555: Status 404 returned error can't find the container with id 2ca5d6dd74136b50251c2e936d3bee1aa7a1e496d4dfae627c8f9fa8aab56555
Dec 04 06:29:49 crc kubenswrapper[4832]: I1204 06:29:49.797704 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 04 06:29:50 crc kubenswrapper[4832]: I1204 06:29:50.237148 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 04 06:29:50 crc kubenswrapper[4832]: I1204 06:29:50.237587 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-log" containerID="cri-o://72840d5c7c99eb8eccb62d3df0b75b1b453c84818474041310afb94270f70700" gracePeriod=30
podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-log" containerID="cri-o://72840d5c7c99eb8eccb62d3df0b75b1b453c84818474041310afb94270f70700" gracePeriod=30 Dec 04 06:29:50 crc kubenswrapper[4832]: I1204 06:29:50.238233 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-httpd" containerID="cri-o://2fed1ff76351fbaebeafd36d90b0d252502fa2ebd49439af7909080730f623f7" gracePeriod=30 Dec 04 06:29:50 crc kubenswrapper[4832]: I1204 06:29:50.561735 4832 generic.go:334] "Generic (PLEG): container finished" podID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerID="72840d5c7c99eb8eccb62d3df0b75b1b453c84818474041310afb94270f70700" exitCode=143 Dec 04 06:29:50 crc kubenswrapper[4832]: I1204 06:29:50.561818 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5","Type":"ContainerDied","Data":"72840d5c7c99eb8eccb62d3df0b75b1b453c84818474041310afb94270f70700"} Dec 04 06:29:50 crc kubenswrapper[4832]: I1204 06:29:50.566833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerStarted","Data":"2ca5d6dd74136b50251c2e936d3bee1aa7a1e496d4dfae627c8f9fa8aab56555"} Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.095930 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vzgts"] Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.097268 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzgts" Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.114302 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vzgts"] Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.179011 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ch62d"] Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.180716 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ch62d" Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.187494 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ch62d"] Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.227619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a675f899-d638-40e8-a597-daa574df9e75-operator-scripts\") pod \"nova-api-db-create-vzgts\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " pod="openstack/nova-api-db-create-vzgts" Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.227770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwv47\" (UniqueName: \"kubernetes.io/projected/a675f899-d638-40e8-a597-daa574df9e75-kube-api-access-nwv47\") pod \"nova-api-db-create-vzgts\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " pod="openstack/nova-api-db-create-vzgts" Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.292746 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-564c-account-create-update-62wgk"] Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.298961 4832 util.go:30] "No sandbox for pod can be found. 
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.301843 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.308128 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-564c-account-create-update-62wgk"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.330008 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwv47\" (UniqueName: \"kubernetes.io/projected/a675f899-d638-40e8-a597-daa574df9e75-kube-api-access-nwv47\") pod \"nova-api-db-create-vzgts\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " pod="openstack/nova-api-db-create-vzgts"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.330097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxn6g\" (UniqueName: \"kubernetes.io/projected/e4561bf4-2c44-4350-8426-4353129c50cf-kube-api-access-lxn6g\") pod \"nova-cell0-db-create-ch62d\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.330159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a675f899-d638-40e8-a597-daa574df9e75-operator-scripts\") pod \"nova-api-db-create-vzgts\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " pod="openstack/nova-api-db-create-vzgts"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.330280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4561bf4-2c44-4350-8426-4353129c50cf-operator-scripts\") pod \"nova-cell0-db-create-ch62d\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.331150 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a675f899-d638-40e8-a597-daa574df9e75-operator-scripts\") pod \"nova-api-db-create-vzgts\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " pod="openstack/nova-api-db-create-vzgts"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.366105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwv47\" (UniqueName: \"kubernetes.io/projected/a675f899-d638-40e8-a597-daa574df9e75-kube-api-access-nwv47\") pod \"nova-api-db-create-vzgts\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " pod="openstack/nova-api-db-create-vzgts"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.394496 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-p55hz"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.396561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.403062 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p55hz"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.455518 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzgts"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.456899 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4561bf4-2c44-4350-8426-4353129c50cf-operator-scripts\") pod \"nova-cell0-db-create-ch62d\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.456984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0967a7-c40d-45f5-b32f-ff14f40d5337-operator-scripts\") pod \"nova-api-564c-account-create-update-62wgk\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.457030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8clqf\" (UniqueName: \"kubernetes.io/projected/7f0967a7-c40d-45f5-b32f-ff14f40d5337-kube-api-access-8clqf\") pod \"nova-api-564c-account-create-update-62wgk\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.457078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxn6g\" (UniqueName: \"kubernetes.io/projected/e4561bf4-2c44-4350-8426-4353129c50cf-kube-api-access-lxn6g\") pod \"nova-cell0-db-create-ch62d\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.457784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4561bf4-2c44-4350-8426-4353129c50cf-operator-scripts\") pod \"nova-cell0-db-create-ch62d\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.491539 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxn6g\" (UniqueName: \"kubernetes.io/projected/e4561bf4-2c44-4350-8426-4353129c50cf-kube-api-access-lxn6g\") pod \"nova-cell0-db-create-ch62d\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.503877 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ch62d"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.530510 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5b4b-account-create-update-bfrhh"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.532094 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.535727 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.549749 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5b4b-account-create-update-bfrhh"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.558452 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0967a7-c40d-45f5-b32f-ff14f40d5337-operator-scripts\") pod \"nova-api-564c-account-create-update-62wgk\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.558564 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8clqf\" (UniqueName: \"kubernetes.io/projected/7f0967a7-c40d-45f5-b32f-ff14f40d5337-kube-api-access-8clqf\") pod \"nova-api-564c-account-create-update-62wgk\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.558625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc636c61-b131-450a-b60b-d205ea0a3c36-operator-scripts\") pod \"nova-cell1-db-create-p55hz\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.558660 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb56q\" (UniqueName: \"kubernetes.io/projected/cc636c61-b131-450a-b60b-d205ea0a3c36-kube-api-access-tb56q\") pod \"nova-cell1-db-create-p55hz\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.559829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0967a7-c40d-45f5-b32f-ff14f40d5337-operator-scripts\") pod \"nova-api-564c-account-create-update-62wgk\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.581952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8clqf\" (UniqueName: \"kubernetes.io/projected/7f0967a7-c40d-45f5-b32f-ff14f40d5337-kube-api-access-8clqf\") pod \"nova-api-564c-account-create-update-62wgk\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.626070 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-564c-account-create-update-62wgk"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.662862 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjx6j\" (UniqueName: \"kubernetes.io/projected/3922f617-41ab-48fa-a501-caa685e933e0-kube-api-access-sjx6j\") pod \"nova-cell0-5b4b-account-create-update-bfrhh\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.662923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3922f617-41ab-48fa-a501-caa685e933e0-operator-scripts\") pod \"nova-cell0-5b4b-account-create-update-bfrhh\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.662975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc636c61-b131-450a-b60b-d205ea0a3c36-operator-scripts\") pod \"nova-cell1-db-create-p55hz\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.662997 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb56q\" (UniqueName: \"kubernetes.io/projected/cc636c61-b131-450a-b60b-d205ea0a3c36-kube-api-access-tb56q\") pod \"nova-cell1-db-create-p55hz\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.669755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc636c61-b131-450a-b60b-d205ea0a3c36-operator-scripts\") pod \"nova-cell1-db-create-p55hz\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.699177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb56q\" (UniqueName: \"kubernetes.io/projected/cc636c61-b131-450a-b60b-d205ea0a3c36-kube-api-access-tb56q\") pod \"nova-cell1-db-create-p55hz\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.699347 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0b65-account-create-update-r8vb2"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.701068 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.709051 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.720497 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b65-account-create-update-r8vb2"]
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.767756 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjx6j\" (UniqueName: \"kubernetes.io/projected/3922f617-41ab-48fa-a501-caa685e933e0-kube-api-access-sjx6j\") pod \"nova-cell0-5b4b-account-create-update-bfrhh\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.767821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3922f617-41ab-48fa-a501-caa685e933e0-operator-scripts\") pod \"nova-cell0-5b4b-account-create-update-bfrhh\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.768826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3922f617-41ab-48fa-a501-caa685e933e0-operator-scripts\") pod \"nova-cell0-5b4b-account-create-update-bfrhh\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.783918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjx6j\" (UniqueName: \"kubernetes.io/projected/3922f617-41ab-48fa-a501-caa685e933e0-kube-api-access-sjx6j\") pod \"nova-cell0-5b4b-account-create-update-bfrhh\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.846110 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p55hz"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.857309 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.869627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4xv\" (UniqueName: \"kubernetes.io/projected/9531a1a5-90a1-487d-a8ea-532746866ae1-kube-api-access-lp4xv\") pod \"nova-cell1-0b65-account-create-update-r8vb2\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.869913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9531a1a5-90a1-487d-a8ea-532746866ae1-operator-scripts\") pod \"nova-cell1-0b65-account-create-update-r8vb2\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.971609 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9531a1a5-90a1-487d-a8ea-532746866ae1-operator-scripts\") pod \"nova-cell1-0b65-account-create-update-r8vb2\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.971681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4xv\" (UniqueName: \"kubernetes.io/projected/9531a1a5-90a1-487d-a8ea-532746866ae1-kube-api-access-lp4xv\") pod \"nova-cell1-0b65-account-create-update-r8vb2\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.972549 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9531a1a5-90a1-487d-a8ea-532746866ae1-operator-scripts\") pod \"nova-cell1-0b65-account-create-update-r8vb2\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:51 crc kubenswrapper[4832]: I1204 06:29:51.992845 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4xv\" (UniqueName: \"kubernetes.io/projected/9531a1a5-90a1-487d-a8ea-532746866ae1-kube-api-access-lp4xv\") pod \"nova-cell1-0b65-account-create-update-r8vb2\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:52 crc kubenswrapper[4832]: I1204 06:29:52.063982 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2"
Dec 04 06:29:53 crc kubenswrapper[4832]: I1204 06:29:53.630263 4832 generic.go:334] "Generic (PLEG): container finished" podID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerID="2fed1ff76351fbaebeafd36d90b0d252502fa2ebd49439af7909080730f623f7" exitCode=0
Dec 04 06:29:53 crc kubenswrapper[4832]: I1204 06:29:53.630342 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5","Type":"ContainerDied","Data":"2fed1ff76351fbaebeafd36d90b0d252502fa2ebd49439af7909080730f623f7"}
Dec 04 06:29:55 crc kubenswrapper[4832]: I1204 06:29:55.046221 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.191944 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.362617 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvvv\" (UniqueName: \"kubernetes.io/projected/875b6361-178d-40f6-b4d0-328dd939c7c1-kube-api-access-5wvvv\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363107 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-internal-tls-certs\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363245 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-config-data\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-scripts\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363324 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-logs\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-httpd-run\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.363625 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-combined-ca-bundle\") pod \"875b6361-178d-40f6-b4d0-328dd939c7c1\" (UID: \"875b6361-178d-40f6-b4d0-328dd939c7c1\") "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.365513 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-logs" (OuterVolumeSpecName: "logs") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.367055 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.373540 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.382005 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875b6361-178d-40f6-b4d0-328dd939c7c1-kube-api-access-5wvvv" (OuterVolumeSpecName: "kube-api-access-5wvvv") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "kube-api-access-5wvvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.382051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-scripts" (OuterVolumeSpecName: "scripts") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.423992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.467342 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.467581 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.467590 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-logs\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.467599 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/875b6361-178d-40f6-b4d0-328dd939c7c1-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.467608 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.467621 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wvvv\" (UniqueName: \"kubernetes.io/projected/875b6361-178d-40f6-b4d0-328dd939c7c1-kube-api-access-5wvvv\") on node \"crc\" DevicePath \"\""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.500631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.517169 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.524708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-config-data" (OuterVolumeSpecName: "config-data") pod "875b6361-178d-40f6-b4d0-328dd939c7c1" (UID: "875b6361-178d-40f6-b4d0-328dd939c7c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.572861 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.572887 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.572898 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875b6361-178d-40f6-b4d0-328dd939c7c1-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.589253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.595812 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8675b9cf45-xl2pz" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.606366 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.680930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.681117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6d22\" (UniqueName: \"kubernetes.io/projected/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-kube-api-access-r6d22\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.681360 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-scripts\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.681619 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-httpd-run\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.681748 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-config-data\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.681867 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-combined-ca-bundle\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.681986 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-public-tls-certs\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.682147 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-logs\") pod \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\" (UID: \"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5\") " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.686081 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-logs" (OuterVolumeSpecName: "logs") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.694221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.696381 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.698365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4936447c-abfd-4bad-b720-db17f1bca70c","Type":"ContainerStarted","Data":"ecc09886730718acbf153acb1d9a0fea5b170c0c1f22f7adf8b69fc87be9d81d"} Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.699208 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.701636 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-scripts" (OuterVolumeSpecName: "scripts") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.702940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-kube-api-access-r6d22" (OuterVolumeSpecName: "kube-api-access-r6d22") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "kube-api-access-r6d22". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.718838 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.730570 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.739495 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.765705547 podStartE2EDuration="15.739446063s" podCreationTimestamp="2025-12-04 06:29:41 +0000 UTC" firstStartedPulling="2025-12-04 06:29:42.197522954 +0000 UTC m=+1237.810340660" lastFinishedPulling="2025-12-04 06:29:56.17126348 +0000 UTC m=+1251.784081176" observedRunningTime="2025-12-04 06:29:56.719269203 +0000 UTC m=+1252.332086909" watchObservedRunningTime="2025-12-04 06:29:56.739446063 +0000 UTC m=+1252.352263769" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.775961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"875b6361-178d-40f6-b4d0-328dd939c7c1","Type":"ContainerDied","Data":"c1e23bf6b79f5904799eb8761c1e8f19c22113b29f3170f0f4d7496e938b57b6"} Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.776020 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerStarted","Data":"e46112f4d6393a4e8349cc06399fa2a6ad2cfc746361f128236fb14a24abfa32"} Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.776038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5","Type":"ContainerDied","Data":"87eff2cf29fe4f7bb3e85f379971d0280db4ae19b21b5a16205775fe5bfba62b"} Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.776817 4832 scope.go:117] "RemoveContainer" containerID="df25075882d3aa12187faabeadf1bdaf9bc0b4998db66a662db347de86b50993" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.793583 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.793634 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.793647 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6d22\" (UniqueName: \"kubernetes.io/projected/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-kube-api-access-r6d22\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.793660 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.793669 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.818603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod 
"3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.837756 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.855536 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.867985 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.868429 4832 scope.go:117] "RemoveContainer" containerID="205dc7e0e8a170782eaced157939013eb65c23f71592c0afdd68a0bea3ecc85e" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.887852 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.891593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-config-data" (OuterVolumeSpecName: "config-data") pod "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" (UID: "3b1ff9ea-abe5-4be6-b608-459a3f58b3d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.896621 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.896853 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.896934 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.896993 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.904142 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:56 crc kubenswrapper[4832]: E1204 06:29:56.904750 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-httpd" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.904783 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-httpd" Dec 04 06:29:56 crc kubenswrapper[4832]: E1204 06:29:56.904812 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-httpd" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.904821 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-httpd" Dec 04 06:29:56 crc kubenswrapper[4832]: E1204 06:29:56.904842 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-log" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.904849 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-log" Dec 04 06:29:56 crc kubenswrapper[4832]: E1204 06:29:56.904860 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-log" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.904868 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-log" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.905119 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-log" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.905139 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-log" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.905157 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" containerName="glance-httpd" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.905174 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" containerName="glance-httpd" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.907861 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.914422 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.920742 4832 scope.go:117] "RemoveContainer" containerID="2fed1ff76351fbaebeafd36d90b0d252502fa2ebd49439af7909080730f623f7" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.931760 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.932536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.952677 4832 scope.go:117] "RemoveContainer" containerID="72840d5c7c99eb8eccb62d3df0b75b1b453c84818474041310afb94270f70700" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.998972 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.999332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.999543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.999663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44deec12-659d-4dcf-a08b-252f6a004f0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.999831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:56 crc kubenswrapper[4832]: I1204 06:29:56.999926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxvc\" (UniqueName: \"kubernetes.io/projected/44deec12-659d-4dcf-a08b-252f6a004f0b-kube-api-access-zqxvc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.000064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/44deec12-659d-4dcf-a08b-252f6a004f0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.000233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.102920 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44deec12-659d-4dcf-a08b-252f6a004f0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxvc\" (UniqueName: \"kubernetes.io/projected/44deec12-659d-4dcf-a08b-252f6a004f0b-kube-api-access-zqxvc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103607 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44deec12-659d-4dcf-a08b-252f6a004f0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.103713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.108640 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.109552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44deec12-659d-4dcf-a08b-252f6a004f0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.110363 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.111093 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.112715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.118948 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44deec12-659d-4dcf-a08b-252f6a004f0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.126974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44deec12-659d-4dcf-a08b-252f6a004f0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.134850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxvc\" (UniqueName: \"kubernetes.io/projected/44deec12-659d-4dcf-a08b-252f6a004f0b-kube-api-access-zqxvc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.170789 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.194844 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.202288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"44deec12-659d-4dcf-a08b-252f6a004f0b\") " pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.209433 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.211298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.222904 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.223403 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.225202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.259636 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-564c-account-create-update-62wgk"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.305281 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ch62d"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.306834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.306922 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e43c1f-dcda-45c3-84aa-fe00d987d334-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.306991 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e43c1f-dcda-45c3-84aa-fe00d987d334-logs\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.307039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-scripts\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.307092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.307122 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47vb\" (UniqueName: \"kubernetes.io/projected/19e43c1f-dcda-45c3-84aa-fe00d987d334-kube-api-access-z47vb\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.307147 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.307196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-config-data\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.350760 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vzgts"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.374897 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.380173 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5b4b-account-create-update-bfrhh"] Dec 04 06:29:57 crc kubenswrapper[4832]: W1204 06:29:57.396017 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3922f617_41ab_48fa_a501_caa685e933e0.slice/crio-ed4e003127a5bc0269547a3253b25912080c7681843455942173ef82bf04c738 WatchSource:0}: Error finding container ed4e003127a5bc0269547a3253b25912080c7681843455942173ef82bf04c738: Status 404 returned error can't find the container with id ed4e003127a5bc0269547a3253b25912080c7681843455942173ef82bf04c738 Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.408875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-scripts\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.409052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.409129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47vb\" (UniqueName: \"kubernetes.io/projected/19e43c1f-dcda-45c3-84aa-fe00d987d334-kube-api-access-z47vb\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 
crc kubenswrapper[4832]: I1204 06:29:57.409248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.409334 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-config-data\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.409494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.409631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e43c1f-dcda-45c3-84aa-fe00d987d334-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.409731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e43c1f-dcda-45c3-84aa-fe00d987d334-logs\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.410239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e43c1f-dcda-45c3-84aa-fe00d987d334-logs\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.410717 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.411470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19e43c1f-dcda-45c3-84aa-fe00d987d334-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.411520 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b65-account-create-update-r8vb2"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.425057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " 
pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.431259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-scripts\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.431964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.435773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19e43c1f-dcda-45c3-84aa-fe00d987d334-config-data\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.441874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47vb\" (UniqueName: \"kubernetes.io/projected/19e43c1f-dcda-45c3-84aa-fe00d987d334-kube-api-access-z47vb\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.445354 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-p55hz"] Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.509060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"19e43c1f-dcda-45c3-84aa-fe00d987d334\") " pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.638529 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.796838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh" event={"ID":"3922f617-41ab-48fa-a501-caa685e933e0","Type":"ContainerStarted","Data":"ed4e003127a5bc0269547a3253b25912080c7681843455942173ef82bf04c738"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.808642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ch62d" event={"ID":"e4561bf4-2c44-4350-8426-4353129c50cf","Type":"ContainerStarted","Data":"f0be97e61778021c13949c2a6010b4d15c8a5bba6f9978641cda0b11ce83a7fe"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.826781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p55hz" event={"ID":"cc636c61-b131-450a-b60b-d205ea0a3c36","Type":"ContainerStarted","Data":"73da2879ce8b6926c24ee93ccc57d89f733568b68b55ba48d3be62bc9255fc21"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.846927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerStarted","Data":"bb5abae526c4825621682ec2d4d83e537c1384cf66eaf0acd55ad299cd6afcd8"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.849932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vzgts" event={"ID":"a675f899-d638-40e8-a597-daa574df9e75","Type":"ContainerStarted","Data":"eea65a638c36986d10ac1fca6f89045561a86a780fbce8b8864d6ca75a50cc74"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.853138 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2" event={"ID":"9531a1a5-90a1-487d-a8ea-532746866ae1","Type":"ContainerStarted","Data":"6eeba647c74d0a581b534b1c7a8dfa59e57c8f3438361ddab2e02d4bbed50c57"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.878660 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-564c-account-create-update-62wgk" event={"ID":"7f0967a7-c40d-45f5-b32f-ff14f40d5337","Type":"ContainerStarted","Data":"5a5e1ba145797712fddf7b348de4410068fcf6216ddac5c849d59868be561840"} Dec 04 06:29:57 crc kubenswrapper[4832]: I1204 06:29:57.922700 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-564c-account-create-update-62wgk" podStartSLOduration=6.922679181 podStartE2EDuration="6.922679181s" podCreationTimestamp="2025-12-04 06:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:29:57.904790917 +0000 UTC m=+1253.517608623" watchObservedRunningTime="2025-12-04 06:29:57.922679181 +0000 UTC m=+1253.535496887" Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.469937 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.584338 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 06:29:58 crc kubenswrapper[4832]: W1204 06:29:58.611236 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19e43c1f_dcda_45c3_84aa_fe00d987d334.slice/crio-5d007cb54f2be7f0eafbafbdb54adb98f611ce99642e0081963c77ff556f6e02 WatchSource:0}: Error finding 
container 5d007cb54f2be7f0eafbafbdb54adb98f611ce99642e0081963c77ff556f6e02: Status 404 returned error can't find the container with id 5d007cb54f2be7f0eafbafbdb54adb98f611ce99642e0081963c77ff556f6e02 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.740473 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1ff9ea-abe5-4be6-b608-459a3f58b3d5" path="/var/lib/kubelet/pods/3b1ff9ea-abe5-4be6-b608-459a3f58b3d5/volumes" Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.742533 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875b6361-178d-40f6-b4d0-328dd939c7c1" path="/var/lib/kubelet/pods/875b6361-178d-40f6-b4d0-328dd939c7c1/volumes" Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.892462 4832 generic.go:334] "Generic (PLEG): container finished" podID="9531a1a5-90a1-487d-a8ea-532746866ae1" containerID="011aa75a50cc64e419b29504acd836af8f2815a8797018b03b702f532459f8ec" exitCode=0 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.892843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2" event={"ID":"9531a1a5-90a1-487d-a8ea-532746866ae1","Type":"ContainerDied","Data":"011aa75a50cc64e419b29504acd836af8f2815a8797018b03b702f532459f8ec"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.900647 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4561bf4-2c44-4350-8426-4353129c50cf" containerID="3f0596a5f682a544afc50adfeb9796e44bf6f456ab1458a4ea5a1c9e141b7e5c" exitCode=0 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.900756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ch62d" event={"ID":"e4561bf4-2c44-4350-8426-4353129c50cf","Type":"ContainerDied","Data":"3f0596a5f682a544afc50adfeb9796e44bf6f456ab1458a4ea5a1c9e141b7e5c"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.907778 4832 generic.go:334] "Generic (PLEG): container finished" podID="a675f899-d638-40e8-a597-daa574df9e75" containerID="f00930ef4636333fb39683a7caf25dcd248e97285f840eec1315cd68224d2120" exitCode=0 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.907932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vzgts" event={"ID":"a675f899-d638-40e8-a597-daa574df9e75","Type":"ContainerDied","Data":"f00930ef4636333fb39683a7caf25dcd248e97285f840eec1315cd68224d2120"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.927833 4832 generic.go:334] "Generic (PLEG): container finished" podID="3922f617-41ab-48fa-a501-caa685e933e0" containerID="ecd076b1e891728d2757fbe9b9ffaaffa76d8a3ab7dfd109033c5cd99e002c63" exitCode=0 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.927921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh" event={"ID":"3922f617-41ab-48fa-a501-caa685e933e0","Type":"ContainerDied","Data":"ecd076b1e891728d2757fbe9b9ffaaffa76d8a3ab7dfd109033c5cd99e002c63"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.931335 4832 generic.go:334] "Generic (PLEG): container finished" podID="cc636c61-b131-450a-b60b-d205ea0a3c36" containerID="7ac4b67a414bf1356549e551fe6a70463d772bb4b325013d4c798be259a2ac81" exitCode=0 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.931409 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p55hz" 
event={"ID":"cc636c61-b131-450a-b60b-d205ea0a3c36","Type":"ContainerDied","Data":"7ac4b67a414bf1356549e551fe6a70463d772bb4b325013d4c798be259a2ac81"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.939346 4832 generic.go:334] "Generic (PLEG): container finished" podID="7f0967a7-c40d-45f5-b32f-ff14f40d5337" containerID="f2fcf9d889fff4f05b6ad596fd02d9f873f5b31e52fc0b8dac4f1d8899287264" exitCode=0 Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.939616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-564c-account-create-update-62wgk" event={"ID":"7f0967a7-c40d-45f5-b32f-ff14f40d5337","Type":"ContainerDied","Data":"f2fcf9d889fff4f05b6ad596fd02d9f873f5b31e52fc0b8dac4f1d8899287264"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.946239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"19e43c1f-dcda-45c3-84aa-fe00d987d334","Type":"ContainerStarted","Data":"5d007cb54f2be7f0eafbafbdb54adb98f611ce99642e0081963c77ff556f6e02"} Dec 04 06:29:58 crc kubenswrapper[4832]: I1204 06:29:58.965172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44deec12-659d-4dcf-a08b-252f6a004f0b","Type":"ContainerStarted","Data":"b6cf6fa2813e2b721a85ef87399699520b34579253e89929cad559497d58054c"} Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.011054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44deec12-659d-4dcf-a08b-252f6a004f0b","Type":"ContainerStarted","Data":"14a56cf162daffe0cdce0ecac5ad590c59abd96dfe5bbfc1e7545f2ea92ff1ab"} Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.011683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44deec12-659d-4dcf-a08b-252f6a004f0b","Type":"ContainerStarted","Data":"3848f42e8d3b67cb8fe5114916c3f60e9c94cc9866e781667f42f1d386a79f21"} Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.016725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"19e43c1f-dcda-45c3-84aa-fe00d987d334","Type":"ContainerStarted","Data":"bbebe2639a4e626a5693796c8f1b6d76c4a98470984d38c1dc7bb806c254e67d"} Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.021521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerStarted","Data":"7b157a75e760ff4fef7d01c3e7c9189a52f63a723d17de390f90d264e0babe2a"} Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.051756 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.051730589 podStartE2EDuration="4.051730589s" podCreationTimestamp="2025-12-04 06:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:00.034910012 +0000 UTC m=+1255.647727728" watchObservedRunningTime="2025-12-04 06:30:00.051730589 +0000 UTC m=+1255.664548295" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.169420 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9"] Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.171538 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.174314 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.176082 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.186270 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9"] Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.226273 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd89528b-85d2-4125-acf4-1a101323819a-secret-volume\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.226973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd89528b-85d2-4125-acf4-1a101323819a-config-volume\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.227376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw659\" (UniqueName: \"kubernetes.io/projected/cd89528b-85d2-4125-acf4-1a101323819a-kube-api-access-kw659\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.329059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw659\" (UniqueName: \"kubernetes.io/projected/cd89528b-85d2-4125-acf4-1a101323819a-kube-api-access-kw659\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.329165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd89528b-85d2-4125-acf4-1a101323819a-secret-volume\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.329240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd89528b-85d2-4125-acf4-1a101323819a-config-volume\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.330818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd89528b-85d2-4125-acf4-1a101323819a-config-volume\") pod 
\"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.350434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd89528b-85d2-4125-acf4-1a101323819a-secret-volume\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.350580 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw659\" (UniqueName: \"kubernetes.io/projected/cd89528b-85d2-4125-acf4-1a101323819a-kube-api-access-kw659\") pod \"collect-profiles-29413830-b4vv9\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.510490 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.581167 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzgts" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.639993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a675f899-d638-40e8-a597-daa574df9e75-operator-scripts\") pod \"a675f899-d638-40e8-a597-daa574df9e75\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.640144 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwv47\" (UniqueName: \"kubernetes.io/projected/a675f899-d638-40e8-a597-daa574df9e75-kube-api-access-nwv47\") pod \"a675f899-d638-40e8-a597-daa574df9e75\" (UID: \"a675f899-d638-40e8-a597-daa574df9e75\") " Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.640698 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a675f899-d638-40e8-a597-daa574df9e75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a675f899-d638-40e8-a597-daa574df9e75" (UID: "a675f899-d638-40e8-a597-daa574df9e75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.646495 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a675f899-d638-40e8-a597-daa574df9e75-kube-api-access-nwv47" (OuterVolumeSpecName: "kube-api-access-nwv47") pod "a675f899-d638-40e8-a597-daa574df9e75" (UID: "a675f899-d638-40e8-a597-daa574df9e75"). InnerVolumeSpecName "kube-api-access-nwv47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.743959 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a675f899-d638-40e8-a597-daa574df9e75-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.744003 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwv47\" (UniqueName: \"kubernetes.io/projected/a675f899-d638-40e8-a597-daa574df9e75-kube-api-access-nwv47\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.812633 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-564c-account-create-update-62wgk" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.848209 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.857961 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p55hz" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.876216 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ch62d" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.884816 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.947841 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp4xv\" (UniqueName: \"kubernetes.io/projected/9531a1a5-90a1-487d-a8ea-532746866ae1-kube-api-access-lp4xv\") pod \"9531a1a5-90a1-487d-a8ea-532746866ae1\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.948000 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0967a7-c40d-45f5-b32f-ff14f40d5337-operator-scripts\") pod \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.948126 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8clqf\" (UniqueName: \"kubernetes.io/projected/7f0967a7-c40d-45f5-b32f-ff14f40d5337-kube-api-access-8clqf\") pod \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\" (UID: \"7f0967a7-c40d-45f5-b32f-ff14f40d5337\") " Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.948175 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9531a1a5-90a1-487d-a8ea-532746866ae1-operator-scripts\") pod \"9531a1a5-90a1-487d-a8ea-532746866ae1\" (UID: \"9531a1a5-90a1-487d-a8ea-532746866ae1\") " Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.950586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0967a7-c40d-45f5-b32f-ff14f40d5337-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f0967a7-c40d-45f5-b32f-ff14f40d5337" (UID: "7f0967a7-c40d-45f5-b32f-ff14f40d5337"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.951513 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9531a1a5-90a1-487d-a8ea-532746866ae1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9531a1a5-90a1-487d-a8ea-532746866ae1" (UID: "9531a1a5-90a1-487d-a8ea-532746866ae1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.955803 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0967a7-c40d-45f5-b32f-ff14f40d5337-kube-api-access-8clqf" (OuterVolumeSpecName: "kube-api-access-8clqf") pod "7f0967a7-c40d-45f5-b32f-ff14f40d5337" (UID: "7f0967a7-c40d-45f5-b32f-ff14f40d5337"). InnerVolumeSpecName "kube-api-access-8clqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:00 crc kubenswrapper[4832]: I1204 06:30:00.968184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9531a1a5-90a1-487d-a8ea-532746866ae1-kube-api-access-lp4xv" (OuterVolumeSpecName: "kube-api-access-lp4xv") pod "9531a1a5-90a1-487d-a8ea-532746866ae1" (UID: "9531a1a5-90a1-487d-a8ea-532746866ae1"). InnerVolumeSpecName "kube-api-access-lp4xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.051090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjx6j\" (UniqueName: \"kubernetes.io/projected/3922f617-41ab-48fa-a501-caa685e933e0-kube-api-access-sjx6j\") pod \"3922f617-41ab-48fa-a501-caa685e933e0\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.051194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb56q\" (UniqueName: \"kubernetes.io/projected/cc636c61-b131-450a-b60b-d205ea0a3c36-kube-api-access-tb56q\") pod \"cc636c61-b131-450a-b60b-d205ea0a3c36\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.051349 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxn6g\" (UniqueName: \"kubernetes.io/projected/e4561bf4-2c44-4350-8426-4353129c50cf-kube-api-access-lxn6g\") pod \"e4561bf4-2c44-4350-8426-4353129c50cf\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.051445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc636c61-b131-450a-b60b-d205ea0a3c36-operator-scripts\") pod \"cc636c61-b131-450a-b60b-d205ea0a3c36\" (UID: \"cc636c61-b131-450a-b60b-d205ea0a3c36\") " Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.051485 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4561bf4-2c44-4350-8426-4353129c50cf-operator-scripts\") pod \"e4561bf4-2c44-4350-8426-4353129c50cf\" (UID: \"e4561bf4-2c44-4350-8426-4353129c50cf\") " Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.051528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3922f617-41ab-48fa-a501-caa685e933e0-operator-scripts\") pod 
\"3922f617-41ab-48fa-a501-caa685e933e0\" (UID: \"3922f617-41ab-48fa-a501-caa685e933e0\") " Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.052956 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc636c61-b131-450a-b60b-d205ea0a3c36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc636c61-b131-450a-b60b-d205ea0a3c36" (UID: "cc636c61-b131-450a-b60b-d205ea0a3c36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.053413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4561bf4-2c44-4350-8426-4353129c50cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4561bf4-2c44-4350-8426-4353129c50cf" (UID: "e4561bf4-2c44-4350-8426-4353129c50cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056358 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0967a7-c40d-45f5-b32f-ff14f40d5337-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056402 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8clqf\" (UniqueName: \"kubernetes.io/projected/7f0967a7-c40d-45f5-b32f-ff14f40d5337-kube-api-access-8clqf\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056416 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9531a1a5-90a1-487d-a8ea-532746866ae1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056430 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc636c61-b131-450a-b60b-d205ea0a3c36-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056440 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4561bf4-2c44-4350-8426-4353129c50cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056451 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp4xv\" (UniqueName: \"kubernetes.io/projected/9531a1a5-90a1-487d-a8ea-532746866ae1-kube-api-access-lp4xv\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.056774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3922f617-41ab-48fa-a501-caa685e933e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3922f617-41ab-48fa-a501-caa685e933e0" (UID: "3922f617-41ab-48fa-a501-caa685e933e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.057935 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3922f617-41ab-48fa-a501-caa685e933e0-kube-api-access-sjx6j" (OuterVolumeSpecName: "kube-api-access-sjx6j") pod "3922f617-41ab-48fa-a501-caa685e933e0" (UID: "3922f617-41ab-48fa-a501-caa685e933e0"). InnerVolumeSpecName "kube-api-access-sjx6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.059877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4561bf4-2c44-4350-8426-4353129c50cf-kube-api-access-lxn6g" (OuterVolumeSpecName: "kube-api-access-lxn6g") pod "e4561bf4-2c44-4350-8426-4353129c50cf" (UID: "e4561bf4-2c44-4350-8426-4353129c50cf"). InnerVolumeSpecName "kube-api-access-lxn6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.060215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc636c61-b131-450a-b60b-d205ea0a3c36-kube-api-access-tb56q" (OuterVolumeSpecName: "kube-api-access-tb56q") pod "cc636c61-b131-450a-b60b-d205ea0a3c36" (UID: "cc636c61-b131-450a-b60b-d205ea0a3c36"). InnerVolumeSpecName "kube-api-access-tb56q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.079454 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.080410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b65-account-create-update-r8vb2" event={"ID":"9531a1a5-90a1-487d-a8ea-532746866ae1","Type":"ContainerDied","Data":"6eeba647c74d0a581b534b1c7a8dfa59e57c8f3438361ddab2e02d4bbed50c57"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.080457 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eeba647c74d0a581b534b1c7a8dfa59e57c8f3438361ddab2e02d4bbed50c57" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.088011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-p55hz" event={"ID":"cc636c61-b131-450a-b60b-d205ea0a3c36","Type":"ContainerDied","Data":"73da2879ce8b6926c24ee93ccc57d89f733568b68b55ba48d3be62bc9255fc21"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.088048 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73da2879ce8b6926c24ee93ccc57d89f733568b68b55ba48d3be62bc9255fc21" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.088107 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-p55hz" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.111183 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-564c-account-create-update-62wgk" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.111212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-564c-account-create-update-62wgk" event={"ID":"7f0967a7-c40d-45f5-b32f-ff14f40d5337","Type":"ContainerDied","Data":"5a5e1ba145797712fddf7b348de4410068fcf6216ddac5c849d59868be561840"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.111312 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a5e1ba145797712fddf7b348de4410068fcf6216ddac5c849d59868be561840" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.115009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"19e43c1f-dcda-45c3-84aa-fe00d987d334","Type":"ContainerStarted","Data":"dcf5cd1ec7e2482558c2e02fa13cfa323809f0852e92b39b0b551efcc60018da"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.125621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerStarted","Data":"c03e7beaaf0ef2673f9c8285e093777c6a352bdd788f4b95d5a1eeba5f902f39"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.125769 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.129939 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vzgts" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.130413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vzgts" event={"ID":"a675f899-d638-40e8-a597-daa574df9e75","Type":"ContainerDied","Data":"eea65a638c36986d10ac1fca6f89045561a86a780fbce8b8864d6ca75a50cc74"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.130439 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eea65a638c36986d10ac1fca6f89045561a86a780fbce8b8864d6ca75a50cc74" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.132167 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.132170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5b4b-account-create-update-bfrhh" event={"ID":"3922f617-41ab-48fa-a501-caa685e933e0","Type":"ContainerDied","Data":"ed4e003127a5bc0269547a3253b25912080c7681843455942173ef82bf04c738"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.132310 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4e003127a5bc0269547a3253b25912080c7681843455942173ef82bf04c738" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.139775 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-ch62d" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.143675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ch62d" event={"ID":"e4561bf4-2c44-4350-8426-4353129c50cf","Type":"ContainerDied","Data":"f0be97e61778021c13949c2a6010b4d15c8a5bba6f9978641cda0b11ce83a7fe"} Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.143760 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0be97e61778021c13949c2a6010b4d15c8a5bba6f9978641cda0b11ce83a7fe" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.163506 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.163484664 podStartE2EDuration="4.163484664s" podCreationTimestamp="2025-12-04 06:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:01.148748728 +0000 UTC m=+1256.761566454" watchObservedRunningTime="2025-12-04 06:30:01.163484664 +0000 UTC m=+1256.776302370" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.164010 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjx6j\" (UniqueName: \"kubernetes.io/projected/3922f617-41ab-48fa-a501-caa685e933e0-kube-api-access-sjx6j\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.164048 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb56q\" (UniqueName: \"kubernetes.io/projected/cc636c61-b131-450a-b60b-d205ea0a3c36-kube-api-access-tb56q\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.164062 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxn6g\" (UniqueName: \"kubernetes.io/projected/e4561bf4-2c44-4350-8426-4353129c50cf-kube-api-access-lxn6g\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.164080 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3922f617-41ab-48fa-a501-caa685e933e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.194471 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.334494979 podStartE2EDuration="13.194447272s" podCreationTimestamp="2025-12-04 06:29:48 +0000 UTC" firstStartedPulling="2025-12-04 06:29:49.592720011 +0000 UTC m=+1245.205537717" lastFinishedPulling="2025-12-04 06:30:00.452672304 +0000 UTC m=+1256.065490010" observedRunningTime="2025-12-04 06:30:01.190689849 +0000 UTC m=+1256.803507555" watchObservedRunningTime="2025-12-04 06:30:01.194447272 +0000 UTC m=+1256.807264988" Dec 04 06:30:01 crc kubenswrapper[4832]: I1204 06:30:01.302082 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9"] Dec 04 06:30:02 crc kubenswrapper[4832]: I1204 06:30:02.151963 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd89528b-85d2-4125-acf4-1a101323819a" containerID="174ec3940d55b507c8dd9634917ee301400b3bec6bb5ada7456cee9dab9dbfec" exitCode=0 Dec 04 06:30:02 crc kubenswrapper[4832]: I1204 06:30:02.152016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" 
event={"ID":"cd89528b-85d2-4125-acf4-1a101323819a","Type":"ContainerDied","Data":"174ec3940d55b507c8dd9634917ee301400b3bec6bb5ada7456cee9dab9dbfec"} Dec 04 06:30:02 crc kubenswrapper[4832]: I1204 06:30:02.152596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" event={"ID":"cd89528b-85d2-4125-acf4-1a101323819a","Type":"ContainerStarted","Data":"609b07f2a2a6918e0689e19f9b4d30272370a80e388300908ade1926636080a9"} Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.551326 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.614027 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd89528b-85d2-4125-acf4-1a101323819a-config-volume\") pod \"cd89528b-85d2-4125-acf4-1a101323819a\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.614225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd89528b-85d2-4125-acf4-1a101323819a-secret-volume\") pod \"cd89528b-85d2-4125-acf4-1a101323819a\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.614679 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw659\" (UniqueName: \"kubernetes.io/projected/cd89528b-85d2-4125-acf4-1a101323819a-kube-api-access-kw659\") pod \"cd89528b-85d2-4125-acf4-1a101323819a\" (UID: \"cd89528b-85d2-4125-acf4-1a101323819a\") " Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.615756 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd89528b-85d2-4125-acf4-1a101323819a-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd89528b-85d2-4125-acf4-1a101323819a" (UID: "cd89528b-85d2-4125-acf4-1a101323819a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.763458 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd89528b-85d2-4125-acf4-1a101323819a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd89528b-85d2-4125-acf4-1a101323819a" (UID: "cd89528b-85d2-4125-acf4-1a101323819a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.763673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd89528b-85d2-4125-acf4-1a101323819a-kube-api-access-kw659" (OuterVolumeSpecName: "kube-api-access-kw659") pod "cd89528b-85d2-4125-acf4-1a101323819a" (UID: "cd89528b-85d2-4125-acf4-1a101323819a"). InnerVolumeSpecName "kube-api-access-kw659". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.767371 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw659\" (UniqueName: \"kubernetes.io/projected/cd89528b-85d2-4125-acf4-1a101323819a-kube-api-access-kw659\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.767441 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd89528b-85d2-4125-acf4-1a101323819a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:03 crc kubenswrapper[4832]: I1204 06:30:03.767455 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd89528b-85d2-4125-acf4-1a101323819a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:04 crc kubenswrapper[4832]: I1204 06:30:04.174743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" event={"ID":"cd89528b-85d2-4125-acf4-1a101323819a","Type":"ContainerDied","Data":"609b07f2a2a6918e0689e19f9b4d30272370a80e388300908ade1926636080a9"} Dec 04 06:30:04 crc kubenswrapper[4832]: I1204 06:30:04.175124 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="609b07f2a2a6918e0689e19f9b4d30272370a80e388300908ade1926636080a9" Dec 04 06:30:04 crc kubenswrapper[4832]: I1204 06:30:04.174819 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9" Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.362451 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.362546 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.975913 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.976515 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-central-agent" containerID="cri-o://e46112f4d6393a4e8349cc06399fa2a6ad2cfc746361f128236fb14a24abfa32" gracePeriod=30 Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.976587 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="sg-core" containerID="cri-o://7b157a75e760ff4fef7d01c3e7c9189a52f63a723d17de390f90d264e0babe2a" gracePeriod=30 Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.976695 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-notification-agent" 
containerID="cri-o://bb5abae526c4825621682ec2d4d83e537c1384cf66eaf0acd55ad299cd6afcd8" gracePeriod=30 Dec 04 06:30:05 crc kubenswrapper[4832]: I1204 06:30:05.976686 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="proxy-httpd" containerID="cri-o://c03e7beaaf0ef2673f9c8285e093777c6a352bdd788f4b95d5a1eeba5f902f39" gracePeriod=30 Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.220784 4832 generic.go:334] "Generic (PLEG): container finished" podID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerID="7b157a75e760ff4fef7d01c3e7c9189a52f63a723d17de390f90d264e0babe2a" exitCode=2 Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.220841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerDied","Data":"7b157a75e760ff4fef7d01c3e7c9189a52f63a723d17de390f90d264e0babe2a"} Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.683268 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-587db8c9db-9blcn" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.874778 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnc6z"] Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875249 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0967a7-c40d-45f5-b32f-ff14f40d5337" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875268 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0967a7-c40d-45f5-b32f-ff14f40d5337" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875286 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc636c61-b131-450a-b60b-d205ea0a3c36" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875300 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc636c61-b131-450a-b60b-d205ea0a3c36" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875322 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3922f617-41ab-48fa-a501-caa685e933e0" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875330 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3922f617-41ab-48fa-a501-caa685e933e0" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875341 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4561bf4-2c44-4350-8426-4353129c50cf" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875348 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4561bf4-2c44-4350-8426-4353129c50cf" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875357 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9531a1a5-90a1-487d-a8ea-532746866ae1" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875364 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9531a1a5-90a1-487d-a8ea-532746866ae1" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875375 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a675f899-d638-40e8-a597-daa574df9e75" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875410 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a675f899-d638-40e8-a597-daa574df9e75" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: E1204 06:30:06.875432 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd89528b-85d2-4125-acf4-1a101323819a" containerName="collect-profiles" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875438 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd89528b-85d2-4125-acf4-1a101323819a" containerName="collect-profiles" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875669 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3922f617-41ab-48fa-a501-caa685e933e0" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875695 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a675f899-d638-40e8-a597-daa574df9e75" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875703 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9531a1a5-90a1-487d-a8ea-532746866ae1" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875711 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd89528b-85d2-4125-acf4-1a101323819a" containerName="collect-profiles" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875724 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4561bf4-2c44-4350-8426-4353129c50cf" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875732 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc636c61-b131-450a-b60b-d205ea0a3c36" containerName="mariadb-database-create" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.875742 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0967a7-c40d-45f5-b32f-ff14f40d5337" containerName="mariadb-account-create-update" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.876479 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.880010 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.880183 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ckncc" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.880292 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 04 06:30:06 crc kubenswrapper[4832]: I1204 06:30:06.887451 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnc6z"] Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.034764 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.035232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-config-data\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.035302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26fnw\" (UniqueName: \"kubernetes.io/projected/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-kube-api-access-26fnw\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.035364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-scripts\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.137651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.137726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-config-data\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.137769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26fnw\" (UniqueName: \"kubernetes.io/projected/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-kube-api-access-26fnw\") pod \"nova-cell0-conductor-db-sync-rnc6z\" 
(UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.137799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-scripts\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.148802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-config-data\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.148827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.150029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-scripts\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.167817 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26fnw\" (UniqueName: \"kubernetes.io/projected/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-kube-api-access-26fnw\") pod \"nova-cell0-conductor-db-sync-rnc6z\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.202523 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.250799 4832 generic.go:334] "Generic (PLEG): container finished" podID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerID="c03e7beaaf0ef2673f9c8285e093777c6a352bdd788f4b95d5a1eeba5f902f39" exitCode=0 Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.250834 4832 generic.go:334] "Generic (PLEG): container finished" podID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerID="bb5abae526c4825621682ec2d4d83e537c1384cf66eaf0acd55ad299cd6afcd8" exitCode=0 Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.250844 4832 generic.go:334] "Generic (PLEG): container finished" podID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerID="e46112f4d6393a4e8349cc06399fa2a6ad2cfc746361f128236fb14a24abfa32" exitCode=0 Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.250859 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerDied","Data":"c03e7beaaf0ef2673f9c8285e093777c6a352bdd788f4b95d5a1eeba5f902f39"} Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.250918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerDied","Data":"bb5abae526c4825621682ec2d4d83e537c1384cf66eaf0acd55ad299cd6afcd8"} Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.250933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerDied","Data":"e46112f4d6393a4e8349cc06399fa2a6ad2cfc746361f128236fb14a24abfa32"} Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.376546 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.377002 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.422958 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.463756 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.640615 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.640701 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.766670 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnc6z"] Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.787349 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 06:30:07 crc kubenswrapper[4832]: I1204 06:30:07.800324 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.263695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dfdebaa0-328f-4799-90f3-95f6bb41a0e5","Type":"ContainerDied","Data":"2ca5d6dd74136b50251c2e936d3bee1aa7a1e496d4dfae627c8f9fa8aab56555"} Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.263979 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca5d6dd74136b50251c2e936d3bee1aa7a1e496d4dfae627c8f9fa8aab56555" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.265157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" event={"ID":"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49","Type":"ContainerStarted","Data":"1e49009724018c252866a258bb007db14ba10f5ebf2a9cce709f0965bc6b8817"} Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.265603 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.265772 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.265797 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.265809 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.281245 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.394568 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-log-httpd\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.394800 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-combined-ca-bundle\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.394943 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-run-httpd\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.394976 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-sg-core-conf-yaml\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.395004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw7gc\" (UniqueName: \"kubernetes.io/projected/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-kube-api-access-mw7gc\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.395038 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-config-data\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.395080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-scripts\") pod \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\" (UID: \"dfdebaa0-328f-4799-90f3-95f6bb41a0e5\") " Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.399051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.400234 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.413744 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-scripts" (OuterVolumeSpecName: "scripts") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.416903 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-kube-api-access-mw7gc" (OuterVolumeSpecName: "kube-api-access-mw7gc") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "kube-api-access-mw7gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.439040 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.493075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.498178 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.498222 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.498239 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw7gc\" (UniqueName: \"kubernetes.io/projected/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-kube-api-access-mw7gc\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.498252 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.498263 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.498274 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.571553 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-config-data" (OuterVolumeSpecName: "config-data") pod "dfdebaa0-328f-4799-90f3-95f6bb41a0e5" (UID: "dfdebaa0-328f-4799-90f3-95f6bb41a0e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:08 crc kubenswrapper[4832]: I1204 06:30:08.599711 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfdebaa0-328f-4799-90f3-95f6bb41a0e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.281519 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.313753 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.321971 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.341018 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:09 crc kubenswrapper[4832]: E1204 06:30:09.341748 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-central-agent" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.341768 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-central-agent" Dec 04 06:30:09 crc kubenswrapper[4832]: E1204 06:30:09.341791 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-notification-agent" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.341799 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-notification-agent" Dec 04 06:30:09 crc kubenswrapper[4832]: E1204 06:30:09.341810 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="proxy-httpd" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.341817 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="proxy-httpd" Dec 04 06:30:09 crc kubenswrapper[4832]: E1204 06:30:09.341841 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="sg-core" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.341847 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="sg-core" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.342013 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-notification-agent" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.342032 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="proxy-httpd" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.342042 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="ceilometer-central-agent" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.342064 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" containerName="sg-core" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.343814 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.350263 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.350721 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.387898 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-scripts\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-config-data\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgql\" (UniqueName: \"kubernetes.io/projected/7c101544-4bd4-49b7-8723-e6acd0b79f25-kube-api-access-bwgql\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417762 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-log-httpd\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.417843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-run-httpd\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.519857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-scripts\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.520557 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-config-data\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.520651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwgql\" (UniqueName: \"kubernetes.io/projected/7c101544-4bd4-49b7-8723-e6acd0b79f25-kube-api-access-bwgql\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.520706 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.520731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.520817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-log-httpd\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.520877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-run-httpd\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.521698 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-run-httpd\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.522110 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-log-httpd\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.522773 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:09 crc kubenswrapper[4832]: E1204 06:30:09.523854 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-bwgql scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="7c101544-4bd4-49b7-8723-e6acd0b79f25" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.528833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-scripts\") pod \"ceilometer-0\" (UID: 
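The E-level pod_workers record above captures the race visible in this stretch of the log: a second DELETE for openstack/ceilometer-0 arrived while volumes for the replacement pod (UID 7c101544-4bd4-49b7-8723-e6acd0b79f25) were still being mounted, so the sync was canceled ("context canceled") with several volumes still listed as unmounted; the records that follow show the kubelet unmounting them again. A sketch for surfacing such records, over the same assumed kubelet.log and with a regex keyed to the record shape above:

import re

# Report pods whose sync was skipped mid-mount, with the volumes still pending.
ERR = re.compile(r'"Error syncing pod, skipping" err="unmounted volumes=\[(?P<vols>[^\]]*)\].*" pod="(?P<pod>[^"]+)"')

with open("kubelet.log") as fh:          # assumed path to the saved journal
    for line in fh:
        if m := ERR.search(line):
            print(m["pod"], "->", m["vols"].split())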
\"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.529145 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-config-data\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.529627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.529939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:09 crc kubenswrapper[4832]: I1204 06:30:09.553760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwgql\" (UniqueName: \"kubernetes.io/projected/7c101544-4bd4-49b7-8723-e6acd0b79f25-kube-api-access-bwgql\") pod \"ceilometer-0\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " pod="openstack/ceilometer-0" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.289297 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.306098 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.440654 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwgql\" (UniqueName: \"kubernetes.io/projected/7c101544-4bd4-49b7-8723-e6acd0b79f25-kube-api-access-bwgql\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.440733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-config-data\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.440781 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-sg-core-conf-yaml\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.440811 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-scripts\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.440991 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-log-httpd\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: 
\"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.441045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-run-httpd\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.441088 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-combined-ca-bundle\") pod \"7c101544-4bd4-49b7-8723-e6acd0b79f25\" (UID: \"7c101544-4bd4-49b7-8723-e6acd0b79f25\") " Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.444921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.445150 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.445333 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c101544-4bd4-49b7-8723-e6acd0b79f25-kube-api-access-bwgql" (OuterVolumeSpecName: "kube-api-access-bwgql") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "kube-api-access-bwgql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.446508 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-scripts" (OuterVolumeSpecName: "scripts") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.447671 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.449031 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.452618 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-config-data" (OuterVolumeSpecName: "config-data") pod "7c101544-4bd4-49b7-8723-e6acd0b79f25" (UID: "7c101544-4bd4-49b7-8723-e6acd0b79f25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543580 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwgql\" (UniqueName: \"kubernetes.io/projected/7c101544-4bd4-49b7-8723-e6acd0b79f25-kube-api-access-bwgql\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543616 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543627 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543635 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543645 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543653 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c101544-4bd4-49b7-8723-e6acd0b79f25-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.543661 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c101544-4bd4-49b7-8723-e6acd0b79f25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.743239 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfdebaa0-328f-4799-90f3-95f6bb41a0e5" path="/var/lib/kubelet/pods/dfdebaa0-328f-4799-90f3-95f6bb41a0e5/volumes" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.943466 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.943599 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.947993 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.948190 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.952139 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 06:30:10 crc kubenswrapper[4832]: I1204 06:30:10.953517 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.303081 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.354254 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.364293 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.412510 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.414991 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.421886 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.422029 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.429359 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.573926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-log-httpd\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.574042 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv5fz\" (UniqueName: \"kubernetes.io/projected/dd972708-f8d0-4685-b239-213b3247f19e-kube-api-access-sv5fz\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.574075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-run-httpd\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.574134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-config-data\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.574224 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-scripts\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.574313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc 
kubenswrapper[4832]: I1204 06:30:11.574333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-run-httpd\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-config-data\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-scripts\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676420 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-log-httpd\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676484 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv5fz\" (UniqueName: \"kubernetes.io/projected/dd972708-f8d0-4685-b239-213b3247f19e-kube-api-access-sv5fz\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676954 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-log-httpd\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.676998 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-run-httpd\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: 
I1204 06:30:11.685421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.686277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-scripts\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.688013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.701675 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv5fz\" (UniqueName: \"kubernetes.io/projected/dd972708-f8d0-4685-b239-213b3247f19e-kube-api-access-sv5fz\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.710006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-config-data\") pod \"ceilometer-0\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " pod="openstack/ceilometer-0" Dec 04 06:30:11 crc kubenswrapper[4832]: I1204 06:30:11.753312 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.298401 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.314367 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.331430 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerID="e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6" exitCode=137 Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.331479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587db8c9db-9blcn" event={"ID":"a6361378-b3ff-41c4-a77e-3bb4a1482984","Type":"ContainerDied","Data":"e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6"} Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.331511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-587db8c9db-9blcn" event={"ID":"a6361378-b3ff-41c4-a77e-3bb4a1482984","Type":"ContainerDied","Data":"0c4236bf8642901bb753b205a81ffffdc00c5382448c49d79904a8433a58a45d"} Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.331531 4832 scope.go:117] "RemoveContainer" containerID="d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.331614 4832 util.go:48] "No ready sandbox for pod can be found. 
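The exitCode=137 just above follows the usual container-runtime convention that an exit status of 128+N means the process was terminated by signal N, so 137 = 128 + 9: the horizon container was SIGKILLed rather than exiting on its own. The plain exitCode=0 and exitCode=2 values later in this log are ordinary process exit statuses. A tiny illustrative Go helper for decoding these (not taken from any Kubernetes source):

    // exitcode.go - interpreting the exitCode values in the PLEG lines.
    package main

    import "fmt"

    // describe maps a container exit status to a human-readable cause,
    // using the 128+signal convention for signal deaths.
    func describe(code int) string {
    	if code > 128 && code < 160 {
    		return fmt.Sprintf("killed by signal %d", code-128)
    	}
    	return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
    	for _, c := range []int{137, 2, 0} { // values seen in this log
    		fmt.Printf("exitCode=%d: %s\n", c, describe(c))
    	}
    }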
Need to start a new one" pod="openstack/horizon-587db8c9db-9blcn" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6361378-b3ff-41c4-a77e-3bb4a1482984-logs\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406631 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfwj\" (UniqueName: \"kubernetes.io/projected/a6361378-b3ff-41c4-a77e-3bb4a1482984-kube-api-access-ccfwj\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406680 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-scripts\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406754 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-config-data\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406801 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-combined-ca-bundle\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406919 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-tls-certs\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.406957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-secret-key\") pod \"a6361378-b3ff-41c4-a77e-3bb4a1482984\" (UID: \"a6361378-b3ff-41c4-a77e-3bb4a1482984\") " Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.415576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6361378-b3ff-41c4-a77e-3bb4a1482984-logs" (OuterVolumeSpecName: "logs") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.416784 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.420911 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6361378-b3ff-41c4-a77e-3bb4a1482984-kube-api-access-ccfwj" (OuterVolumeSpecName: "kube-api-access-ccfwj") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "kube-api-access-ccfwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.452333 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-config-data" (OuterVolumeSpecName: "config-data") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.452537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-scripts" (OuterVolumeSpecName: "scripts") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.467720 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.504306 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "a6361378-b3ff-41c4-a77e-3bb4a1482984" (UID: "a6361378-b3ff-41c4-a77e-3bb4a1482984"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509863 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509902 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509913 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6361378-b3ff-41c4-a77e-3bb4a1482984-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509923 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfwj\" (UniqueName: \"kubernetes.io/projected/a6361378-b3ff-41c4-a77e-3bb4a1482984-kube-api-access-ccfwj\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509934 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509942 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6361378-b3ff-41c4-a77e-3bb4a1482984-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.509951 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6361378-b3ff-41c4-a77e-3bb4a1482984-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.698499 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-587db8c9db-9blcn"] Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.709643 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-587db8c9db-9blcn"] Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.736619 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c101544-4bd4-49b7-8723-e6acd0b79f25" path="/var/lib/kubelet/pods/7c101544-4bd4-49b7-8723-e6acd0b79f25/volumes" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.737848 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" path="/var/lib/kubelet/pods/a6361378-b3ff-41c4-a77e-3bb4a1482984/volumes" Dec 04 06:30:12 crc kubenswrapper[4832]: I1204 06:30:12.890030 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:17 crc kubenswrapper[4832]: I1204 06:30:17.388373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerStarted","Data":"888e837b56ea27bb32bc9bb148f267dabd8071559c781da5c811a3de59d95427"} Dec 04 06:30:17 crc kubenswrapper[4832]: I1204 06:30:17.613668 4832 scope.go:117] "RemoveContainer" containerID="e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6" Dec 04 06:30:17 crc kubenswrapper[4832]: I1204 06:30:17.688213 4832 scope.go:117] "RemoveContainer" containerID="d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339" Dec 04 
06:30:17 crc kubenswrapper[4832]: E1204 06:30:17.688637 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339\": container with ID starting with d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339 not found: ID does not exist" containerID="d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339" Dec 04 06:30:17 crc kubenswrapper[4832]: I1204 06:30:17.688681 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339"} err="failed to get container status \"d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339\": rpc error: code = NotFound desc = could not find container \"d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339\": container with ID starting with d083b7a3f5385d93f5b939878fd507841fbfc04d32891d7e0f5566a2009b1339 not found: ID does not exist" Dec 04 06:30:17 crc kubenswrapper[4832]: I1204 06:30:17.688705 4832 scope.go:117] "RemoveContainer" containerID="e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6" Dec 04 06:30:17 crc kubenswrapper[4832]: E1204 06:30:17.688996 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6\": container with ID starting with e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6 not found: ID does not exist" containerID="e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6" Dec 04 06:30:17 crc kubenswrapper[4832]: I1204 06:30:17.689039 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6"} err="failed to get container status \"e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6\": rpc error: code = NotFound desc = could not find container \"e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6\": container with ID starting with e0098beb9970378660f98bd17b1bede5c284155aa452761a84b67b8857b961c6 not found: ID does not exist" Dec 04 06:30:18 crc kubenswrapper[4832]: I1204 06:30:18.407307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" event={"ID":"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49","Type":"ContainerStarted","Data":"3f0375403c06179d4a3439ef48a3d7f37b4ee4c8b99d700f62f8f41eba064d98"} Dec 04 06:30:18 crc kubenswrapper[4832]: I1204 06:30:18.412897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerStarted","Data":"df40bca24b59ffe8c6fdd28e1d20e7b38dd1e490517c7cafd53cd998aec483bd"} Dec 04 06:30:18 crc kubenswrapper[4832]: I1204 06:30:18.431609 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" podStartSLOduration=2.51750917 podStartE2EDuration="12.431591331s" podCreationTimestamp="2025-12-04 06:30:06 +0000 UTC" firstStartedPulling="2025-12-04 06:30:07.803951221 +0000 UTC m=+1263.416768927" lastFinishedPulling="2025-12-04 06:30:17.718033382 +0000 UTC m=+1273.330851088" observedRunningTime="2025-12-04 06:30:18.423615663 +0000 UTC m=+1274.036433369" watchObservedRunningTime="2025-12-04 06:30:18.431591331 +0000 UTC m=+1274.044409027" Dec 04 
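The NotFound pair above is the expected shape of a repeated delete: the kubelet re-issues RemoveContainer for the two horizon container IDs that were already removed, CRI-O answers rpc NotFound, and the kubelet logs the error and moves on. A hedged Go sketch of that idempotent-delete pattern against any gRPC-style API (removeIgnoreNotFound and the stub removeFn are hypothetical names for illustration, not the kubelet's own API):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIgnoreNotFound treats NotFound as success: the object being
    // gone already is exactly the state a delete is trying to reach.
    func removeIgnoreNotFound(id string, removeFn func(string) error) error {
    	err := removeFn(id)
    	if status.Code(err) == codes.NotFound {
    		return nil // already deleted by an earlier pass
    	}
    	return err
    }

    func main() {
    	// Stub that always answers like CRI-O does above.
    	gone := func(id string) error {
    		return status.Errorf(codes.NotFound, "could not find container %q", id)
    	}
    	fmt.Println(removeIgnoreNotFound("d083b7a3f5385d93", gone)) // <nil>
    }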
Dec 04 06:30:20 crc kubenswrapper[4832]: I1204 06:30:20.441484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerStarted","Data":"703d2eb9413661709caa8c6ef3aac10e5c19c14cfb18e345a582b4db1022c1bd"}
Dec 04 06:30:21 crc kubenswrapper[4832]: I1204 06:30:21.454364 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerStarted","Data":"dc3c859747828abe19da5534685395d060a1e6e40dfc87c6618a797e8f59d384"}
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.476812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerStarted","Data":"99a472770723ce78feca4fca890f726fadbbef1a8a4cdf9763cb31dd7c775372"}
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.477440 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.480889 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="proxy-httpd" containerID="cri-o://99a472770723ce78feca4fca890f726fadbbef1a8a4cdf9763cb31dd7c775372" gracePeriod=30
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.480648 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="sg-core" containerID="cri-o://dc3c859747828abe19da5534685395d060a1e6e40dfc87c6618a797e8f59d384" gracePeriod=30
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.481811 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-notification-agent" containerID="cri-o://703d2eb9413661709caa8c6ef3aac10e5c19c14cfb18e345a582b4db1022c1bd" gracePeriod=30
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.481885 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-central-agent" containerID="cri-o://df40bca24b59ffe8c6fdd28e1d20e7b38dd1e490517c7cafd53cd998aec483bd" gracePeriod=30
Dec 04 06:30:23 crc kubenswrapper[4832]: I1204 06:30:23.508182 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.461027472 podStartE2EDuration="12.508130837s" podCreationTimestamp="2025-12-04 06:30:11 +0000 UTC" firstStartedPulling="2025-12-04 06:30:17.438747896 +0000 UTC m=+1273.051565602" lastFinishedPulling="2025-12-04 06:30:22.485851261 +0000 UTC m=+1278.098668967" observedRunningTime="2025-12-04 06:30:23.501954333 +0000 UTC m=+1279.114772029" watchObservedRunningTime="2025-12-04 06:30:23.508130837 +0000 UTC m=+1279.120948543"
Dec 04 06:30:24 crc kubenswrapper[4832]: I1204 06:30:24.494119 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd972708-f8d0-4685-b239-213b3247f19e" containerID="99a472770723ce78feca4fca890f726fadbbef1a8a4cdf9763cb31dd7c775372" exitCode=0
Dec 04 06:30:24 crc kubenswrapper[4832]: I1204 06:30:24.494726 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd972708-f8d0-4685-b239-213b3247f19e" containerID="dc3c859747828abe19da5534685395d060a1e6e40dfc87c6618a797e8f59d384" exitCode=2
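The two pod_startup_latency_tracker entries (nova-cell0-conductor-db-sync-rnc6z earlier and ceilometer-0 here) are consistent with podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp and podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), i.e. the SLO figure excludes image-pull time. A quick verification using the monotonic m=+ offsets printed in the entries (a check on the arithmetic, not kubelet code):

    package main

    import "fmt"

    func main() {
    	// ceilometer-0: pull window = lastFinishedPulling - firstStartedPulling
    	pull := 1278.098668967 - 1273.051565602 // 5.047103365 s
    	fmt.Printf("%.9f\n", 12.508130837-pull) // 7.461027472 == podStartSLOduration

    	// nova-cell0-conductor-db-sync-rnc6z
    	pull = 1273.330851088 - 1263.416768927 // 9.914082161 s
    	fmt.Printf("%.9f\n", 12.431591331-pull) // 2.517509170 == podStartSLOduration
    }

Both logged podStartSLOduration values match to the printed precision, which supports the decomposition above: roughly 5 s of the ceilometer-0 start and almost 10 s of the db-sync start were spent pulling images.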
containerID="dc3c859747828abe19da5534685395d060a1e6e40dfc87c6618a797e8f59d384" exitCode=2 Dec 04 06:30:24 crc kubenswrapper[4832]: I1204 06:30:24.494739 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd972708-f8d0-4685-b239-213b3247f19e" containerID="703d2eb9413661709caa8c6ef3aac10e5c19c14cfb18e345a582b4db1022c1bd" exitCode=0 Dec 04 06:30:24 crc kubenswrapper[4832]: I1204 06:30:24.494197 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerDied","Data":"99a472770723ce78feca4fca890f726fadbbef1a8a4cdf9763cb31dd7c775372"} Dec 04 06:30:24 crc kubenswrapper[4832]: I1204 06:30:24.494774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerDied","Data":"dc3c859747828abe19da5534685395d060a1e6e40dfc87c6618a797e8f59d384"} Dec 04 06:30:24 crc kubenswrapper[4832]: I1204 06:30:24.494787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerDied","Data":"703d2eb9413661709caa8c6ef3aac10e5c19c14cfb18e345a582b4db1022c1bd"} Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.523333 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd972708-f8d0-4685-b239-213b3247f19e" containerID="df40bca24b59ffe8c6fdd28e1d20e7b38dd1e490517c7cafd53cd998aec483bd" exitCode=0 Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.523427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerDied","Data":"df40bca24b59ffe8c6fdd28e1d20e7b38dd1e490517c7cafd53cd998aec483bd"} Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.523911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd972708-f8d0-4685-b239-213b3247f19e","Type":"ContainerDied","Data":"888e837b56ea27bb32bc9bb148f267dabd8071559c781da5c811a3de59d95427"} Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.523927 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="888e837b56ea27bb32bc9bb148f267dabd8071559c781da5c811a3de59d95427" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.547214 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.675753 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv5fz\" (UniqueName: \"kubernetes.io/projected/dd972708-f8d0-4685-b239-213b3247f19e-kube-api-access-sv5fz\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.675830 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-run-httpd\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.675902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-log-httpd\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.676098 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-sg-core-conf-yaml\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.676607 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.676703 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.676958 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-scripts\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.676989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-combined-ca-bundle\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.677073 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-config-data\") pod \"dd972708-f8d0-4685-b239-213b3247f19e\" (UID: \"dd972708-f8d0-4685-b239-213b3247f19e\") " Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.677539 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.677561 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd972708-f8d0-4685-b239-213b3247f19e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.684612 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-scripts" (OuterVolumeSpecName: "scripts") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.686197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd972708-f8d0-4685-b239-213b3247f19e-kube-api-access-sv5fz" (OuterVolumeSpecName: "kube-api-access-sv5fz") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "kube-api-access-sv5fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.716593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.769085 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.780017 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv5fz\" (UniqueName: \"kubernetes.io/projected/dd972708-f8d0-4685-b239-213b3247f19e-kube-api-access-sv5fz\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.780057 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.780070 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.780083 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.803793 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-config-data" (OuterVolumeSpecName: "config-data") pod "dd972708-f8d0-4685-b239-213b3247f19e" (UID: "dd972708-f8d0-4685-b239-213b3247f19e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:27 crc kubenswrapper[4832]: I1204 06:30:27.882722 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd972708-f8d0-4685-b239-213b3247f19e-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.534877 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.581954 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.594565 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624296 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:28 crc kubenswrapper[4832]: E1204 06:30:28.624833 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="proxy-httpd" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624855 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="proxy-httpd" Dec 04 06:30:28 crc kubenswrapper[4832]: E1204 06:30:28.624885 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-notification-agent" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624894 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-notification-agent" Dec 04 06:30:28 crc kubenswrapper[4832]: E1204 06:30:28.624918 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-central-agent" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624926 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-central-agent" Dec 04 06:30:28 crc kubenswrapper[4832]: E1204 06:30:28.624938 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624944 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" Dec 04 06:30:28 crc kubenswrapper[4832]: E1204 06:30:28.624960 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon-log" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624966 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon-log" Dec 04 06:30:28 crc kubenswrapper[4832]: E1204 06:30:28.624979 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="sg-core" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.624986 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="sg-core" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.625174 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon-log" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.625186 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-notification-agent" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.625200 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="proxy-httpd" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.625209 
4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6361378-b3ff-41c4-a77e-3bb4a1482984" containerName="horizon" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.625224 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="sg-core" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.625236 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd972708-f8d0-4685-b239-213b3247f19e" containerName="ceilometer-central-agent" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.627248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.629820 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.630446 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.663821 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.723333 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd972708-f8d0-4685-b239-213b3247f19e" path="/var/lib/kubelet/pods/dd972708-f8d0-4685-b239-213b3247f19e/volumes" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-log-httpd\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810556 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-run-httpd\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810746 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-config-data\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zkn\" (UniqueName: \"kubernetes.io/projected/61591f62-5ac1-4661-9262-ec4a61089a74-kube-api-access-r7zkn\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.810927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-scripts\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-scripts\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913537 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-log-httpd\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913569 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-run-httpd\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913703 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-config-data\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zkn\" (UniqueName: \"kubernetes.io/projected/61591f62-5ac1-4661-9262-ec4a61089a74-kube-api-access-r7zkn\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.913760 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.914185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-log-httpd\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.914319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-run-httpd\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.920626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.921964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-scripts\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.922428 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.935206 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zkn\" (UniqueName: \"kubernetes.io/projected/61591f62-5ac1-4661-9262-ec4a61089a74-kube-api-access-r7zkn\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.938761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-config-data\") pod \"ceilometer-0\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " pod="openstack/ceilometer-0" Dec 04 06:30:28 crc kubenswrapper[4832]: I1204 06:30:28.945300 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:30 crc kubenswrapper[4832]: I1204 06:30:30.404792 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:30 crc kubenswrapper[4832]: I1204 06:30:30.556069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerStarted","Data":"28832bd7681826c5db4a9ce0a26f51f588705825f70cc884c62ec3ae23f4b424"} Dec 04 06:30:31 crc kubenswrapper[4832]: I1204 06:30:31.574187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerStarted","Data":"87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743"} Dec 04 06:30:31 crc kubenswrapper[4832]: I1204 06:30:31.577254 4832 generic.go:334] "Generic (PLEG): container finished" podID="bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" containerID="3f0375403c06179d4a3439ef48a3d7f37b4ee4c8b99d700f62f8f41eba064d98" exitCode=0 Dec 04 06:30:31 crc kubenswrapper[4832]: I1204 06:30:31.577319 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" event={"ID":"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49","Type":"ContainerDied","Data":"3f0375403c06179d4a3439ef48a3d7f37b4ee4c8b99d700f62f8f41eba064d98"} Dec 04 06:30:32 crc kubenswrapper[4832]: I1204 06:30:32.648798 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerStarted","Data":"0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f"} Dec 04 06:30:32 crc kubenswrapper[4832]: I1204 06:30:32.990939 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.124410 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-combined-ca-bundle\") pod \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.124781 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-config-data\") pod \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.125056 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-scripts\") pod \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.125219 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26fnw\" (UniqueName: \"kubernetes.io/projected/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-kube-api-access-26fnw\") pod \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\" (UID: \"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49\") " Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.132951 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-kube-api-access-26fnw" (OuterVolumeSpecName: "kube-api-access-26fnw") pod "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" (UID: "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49"). InnerVolumeSpecName "kube-api-access-26fnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.140037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-scripts" (OuterVolumeSpecName: "scripts") pod "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" (UID: "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.163489 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" (UID: "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.167242 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-config-data" (OuterVolumeSpecName: "config-data") pod "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" (UID: "bd907035-aa8f-4dd1-bc4d-06eb3fde3b49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.228694 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26fnw\" (UniqueName: \"kubernetes.io/projected/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-kube-api-access-26fnw\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.228981 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.229040 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.229130 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.660078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerStarted","Data":"6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354"} Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.661689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" event={"ID":"bd907035-aa8f-4dd1-bc4d-06eb3fde3b49","Type":"ContainerDied","Data":"1e49009724018c252866a258bb007db14ba10f5ebf2a9cce709f0965bc6b8817"} Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.661714 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e49009724018c252866a258bb007db14ba10f5ebf2a9cce709f0965bc6b8817" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.661777 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnc6z" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.731457 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 04 06:30:33 crc kubenswrapper[4832]: E1204 06:30:33.731868 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" containerName="nova-cell0-conductor-db-sync" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.731884 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" containerName="nova-cell0-conductor-db-sync" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.732057 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" containerName="nova-cell0-conductor-db-sync" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.732730 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.734872 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.735956 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ckncc" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.741969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.839121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d2f2b1-c068-4912-9c17-adb96b1d9233-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.839613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjn59\" (UniqueName: \"kubernetes.io/projected/83d2f2b1-c068-4912-9c17-adb96b1d9233-kube-api-access-wjn59\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.839691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d2f2b1-c068-4912-9c17-adb96b1d9233-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.941403 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d2f2b1-c068-4912-9c17-adb96b1d9233-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.941463 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjn59\" (UniqueName: \"kubernetes.io/projected/83d2f2b1-c068-4912-9c17-adb96b1d9233-kube-api-access-wjn59\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.941499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d2f2b1-c068-4912-9c17-adb96b1d9233-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.945249 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d2f2b1-c068-4912-9c17-adb96b1d9233-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.947003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d2f2b1-c068-4912-9c17-adb96b1d9233-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:33 crc kubenswrapper[4832]: I1204 06:30:33.966885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjn59\" (UniqueName: \"kubernetes.io/projected/83d2f2b1-c068-4912-9c17-adb96b1d9233-kube-api-access-wjn59\") pod \"nova-cell0-conductor-0\" (UID: \"83d2f2b1-c068-4912-9c17-adb96b1d9233\") " pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.059882 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.521566 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.676609 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerStarted","Data":"bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6"} Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.676748 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.678509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"83d2f2b1-c068-4912-9c17-adb96b1d9233","Type":"ContainerStarted","Data":"e22032f7699ef9968b8de8598ae7bf1bc388b3572256aa9ed635c4494c8bcd26"} Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.705219 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.192390784 podStartE2EDuration="6.705197183s" podCreationTimestamp="2025-12-04 06:30:28 +0000 UTC" firstStartedPulling="2025-12-04 06:30:30.411786071 +0000 UTC m=+1286.024603777" lastFinishedPulling="2025-12-04 06:30:33.92459247 +0000 UTC m=+1289.537410176" observedRunningTime="2025-12-04 06:30:34.698854145 +0000 UTC m=+1290.311671851" watchObservedRunningTime="2025-12-04 06:30:34.705197183 +0000 UTC m=+1290.318014889" Dec 04 06:30:34 crc kubenswrapper[4832]: I1204 06:30:34.837986 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.362977 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.363383 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.363587 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.364553 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"348e974629646d54fa0c54ee820cfb4880e34f9731c9b205d4b99c38588e1db7"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.364681 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://348e974629646d54fa0c54ee820cfb4880e34f9731c9b205d4b99c38588e1db7" gracePeriod=600 Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.693643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"83d2f2b1-c068-4912-9c17-adb96b1d9233","Type":"ContainerStarted","Data":"7f688ef3e9e4e2c05efe863db8cc8298d996a348194cea762b460a3f8d65c506"} Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.694182 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.697943 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="348e974629646d54fa0c54ee820cfb4880e34f9731c9b205d4b99c38588e1db7" exitCode=0 Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.698082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"348e974629646d54fa0c54ee820cfb4880e34f9731c9b205d4b99c38588e1db7"} Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.698121 4832 scope.go:117] "RemoveContainer" containerID="f9320b2ca718b6f93f88166b331265dcdcf00ab62d4761ea9c83e290c8013b61" Dec 04 06:30:35 crc kubenswrapper[4832]: I1204 06:30:35.729164 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.729135359 podStartE2EDuration="2.729135359s" podCreationTimestamp="2025-12-04 06:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:35.718800423 +0000 UTC m=+1291.331618129" watchObservedRunningTime="2025-12-04 06:30:35.729135359 +0000 UTC m=+1291.341953075" Dec 04 06:30:36 crc kubenswrapper[4832]: I1204 06:30:36.708984 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5"} Dec 04 06:30:36 crc kubenswrapper[4832]: I1204 06:30:36.709186 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-central-agent" containerID="cri-o://87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743" gracePeriod=30 Dec 04 06:30:36 crc kubenswrapper[4832]: I1204 06:30:36.709238 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="sg-core" containerID="cri-o://6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354" gracePeriod=30 Dec 04 06:30:36 crc kubenswrapper[4832]: I1204 
06:30:36.709267 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-notification-agent" containerID="cri-o://0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f" gracePeriod=30 Dec 04 06:30:36 crc kubenswrapper[4832]: I1204 06:30:36.710456 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="proxy-httpd" containerID="cri-o://bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6" gracePeriod=30 Dec 04 06:30:37 crc kubenswrapper[4832]: I1204 06:30:37.723825 4832 generic.go:334] "Generic (PLEG): container finished" podID="61591f62-5ac1-4661-9262-ec4a61089a74" containerID="bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6" exitCode=0 Dec 04 06:30:37 crc kubenswrapper[4832]: I1204 06:30:37.724286 4832 generic.go:334] "Generic (PLEG): container finished" podID="61591f62-5ac1-4661-9262-ec4a61089a74" containerID="6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354" exitCode=2 Dec 04 06:30:37 crc kubenswrapper[4832]: I1204 06:30:37.724300 4832 generic.go:334] "Generic (PLEG): container finished" podID="61591f62-5ac1-4661-9262-ec4a61089a74" containerID="0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f" exitCode=0 Dec 04 06:30:37 crc kubenswrapper[4832]: I1204 06:30:37.723918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerDied","Data":"bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6"} Dec 04 06:30:37 crc kubenswrapper[4832]: I1204 06:30:37.724442 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerDied","Data":"6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354"} Dec 04 06:30:37 crc kubenswrapper[4832]: I1204 06:30:37.724458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerDied","Data":"0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f"} Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.096721 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.435959 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.586788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-log-httpd\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.586873 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zkn\" (UniqueName: \"kubernetes.io/projected/61591f62-5ac1-4661-9262-ec4a61089a74-kube-api-access-r7zkn\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.586902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-config-data\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.586956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-sg-core-conf-yaml\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.587003 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-combined-ca-bundle\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.587109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-scripts\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.587171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-run-httpd\") pod \"61591f62-5ac1-4661-9262-ec4a61089a74\" (UID: \"61591f62-5ac1-4661-9262-ec4a61089a74\") " Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.587518 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.588688 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.588845 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.595558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61591f62-5ac1-4661-9262-ec4a61089a74-kube-api-access-r7zkn" (OuterVolumeSpecName: "kube-api-access-r7zkn") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). InnerVolumeSpecName "kube-api-access-r7zkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.595599 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-scripts" (OuterVolumeSpecName: "scripts") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.629816 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.650123 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-64xph"] Dec 04 06:30:39 crc kubenswrapper[4832]: E1204 06:30:39.650685 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-central-agent" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.650711 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-central-agent" Dec 04 06:30:39 crc kubenswrapper[4832]: E1204 06:30:39.650735 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-notification-agent" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.650745 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-notification-agent" Dec 04 06:30:39 crc kubenswrapper[4832]: E1204 06:30:39.650773 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="proxy-httpd" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.650782 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="proxy-httpd" Dec 04 06:30:39 crc kubenswrapper[4832]: E1204 06:30:39.650803 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="sg-core" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.650809 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="sg-core" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.651017 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="sg-core" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.651043 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" 
containerName="proxy-httpd" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.651053 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-notification-agent" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.651062 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" containerName="ceilometer-central-agent" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.652020 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.655930 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.656452 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.667530 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-64xph"] Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.693558 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zkn\" (UniqueName: \"kubernetes.io/projected/61591f62-5ac1-4661-9262-ec4a61089a74-kube-api-access-r7zkn\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.699114 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.699166 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.699180 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61591f62-5ac1-4661-9262-ec4a61089a74-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.780012 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.784789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-config-data" (OuterVolumeSpecName: "config-data") pod "61591f62-5ac1-4661-9262-ec4a61089a74" (UID: "61591f62-5ac1-4661-9262-ec4a61089a74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.802905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-scripts\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.802958 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24b6k\" (UniqueName: \"kubernetes.io/projected/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-kube-api-access-24b6k\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.802986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-config-data\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.803039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.803174 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.803186 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61591f62-5ac1-4661-9262-ec4a61089a74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.818015 4832 generic.go:334] "Generic (PLEG): container finished" podID="61591f62-5ac1-4661-9262-ec4a61089a74" containerID="87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743" exitCode=0 Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.818068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerDied","Data":"87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743"} Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.818100 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61591f62-5ac1-4661-9262-ec4a61089a74","Type":"ContainerDied","Data":"28832bd7681826c5db4a9ce0a26f51f588705825f70cc884c62ec3ae23f4b424"} Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.818120 4832 scope.go:117] "RemoveContainer" containerID="bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.818339 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.885214 4832 scope.go:117] "RemoveContainer" containerID="6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.890366 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.893194 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.901308 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.904834 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b6k\" (UniqueName: \"kubernetes.io/projected/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-kube-api-access-24b6k\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.904881 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-config-data\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.904987 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.905155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-scripts\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.930112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-config-data\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.947212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.967275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b6k\" (UniqueName: \"kubernetes.io/projected/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-kube-api-access-24b6k\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.969764 4832 scope.go:117] 
"RemoveContainer" containerID="0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.970375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-scripts\") pod \"nova-cell0-cell-mapping-64xph\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:39 crc kubenswrapper[4832]: I1204 06:30:39.981351 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.010018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.010315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c729f\" (UniqueName: \"kubernetes.io/projected/c1d7586a-ce09-409b-ac0b-a310bf90dec0-kube-api-access-c729f\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.010439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.046323 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.048722 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.057606 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.064895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.069630 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.091650 4832 scope.go:117] "RemoveContainer" containerID="87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.128612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.128790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.128833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c729f\" (UniqueName: \"kubernetes.io/projected/c1d7586a-ce09-409b-ac0b-a310bf90dec0-kube-api-access-c729f\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.145239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.154555 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.169735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c729f\" (UniqueName: \"kubernetes.io/projected/c1d7586a-ce09-409b-ac0b-a310bf90dec0-kube-api-access-c729f\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.182618 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.187146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.202361 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.204846 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.210271 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.210635 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.217548 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.218902 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.223178 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.230815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.231108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-config-data\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.231244 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c2e869-61be-409d-ab69-60b0d6e87ca6-logs\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.231359 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgp58\" (UniqueName: \"kubernetes.io/projected/d9c2e869-61be-409d-ab69-60b0d6e87ca6-kube-api-access-dgp58\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.237182 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.272808 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.282097 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.299463 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.301154 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.304719 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.316120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.333411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.333665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-scripts\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.333786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6lv\" (UniqueName: \"kubernetes.io/projected/ce6686a3-5933-4fb9-920b-29d135f7e46f-kube-api-access-5g6lv\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.333870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.333962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-config-data\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-config-data\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-log-httpd\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-config-data\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d9c2e869-61be-409d-ab69-60b0d6e87ca6-logs\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgp58\" (UniqueName: \"kubernetes.io/projected/d9c2e869-61be-409d-ab69-60b0d6e87ca6-kube-api-access-dgp58\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-run-httpd\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.334738 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gp8\" (UniqueName: \"kubernetes.io/projected/3a1db388-a224-4e6f-b54d-de7b0321a518-kube-api-access-x5gp8\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.340377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.350756 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c2e869-61be-409d-ab69-60b0d6e87ca6-logs\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.357064 4832 scope.go:117] "RemoveContainer" containerID="bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6" Dec 04 06:30:40 crc kubenswrapper[4832]: E1204 06:30:40.357490 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6\": container with ID starting with bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6 not found: ID does not exist" containerID="bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.357524 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6"} err="failed to get container status \"bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6\": rpc error: code = NotFound desc = could not find container \"bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6\": container with ID starting with bfc601301536e4be6e6e5f4142ee38fee56f27242b0f0d35febeebdd7d3ceaa6 not found: ID does not exist" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.357547 4832 scope.go:117] "RemoveContainer" containerID="6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354" Dec 04 06:30:40 crc kubenswrapper[4832]: E1204 06:30:40.357786 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354\": container with ID starting with 6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354 not found: ID does not exist" containerID="6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.357802 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354"} err="failed to get container status \"6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354\": rpc error: code = NotFound desc = could not find container \"6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354\": container with ID starting with 6ebaae8fab2939849d835d59da7ec49c08456a29b37e4d3b61eb14d40dafb354 not found: ID does not exist" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.357815 4832 scope.go:117] "RemoveContainer" containerID="0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f" Dec 04 06:30:40 crc kubenswrapper[4832]: E1204 06:30:40.358720 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f\": container with ID starting with 0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f not found: ID does not exist" containerID="0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.358761 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f"} err="failed to get container status \"0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f\": rpc error: code = NotFound desc = could not find container \"0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f\": container with ID starting with 0ec94a849be746f1a5ec5c7b3b153fb75da18de5667bfd2199d2702a4dda0d4f not found: ID does not exist" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.358789 4832 scope.go:117] "RemoveContainer" containerID="87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743" Dec 04 06:30:40 crc kubenswrapper[4832]: E1204 06:30:40.359239 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743\": container with ID starting with 87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743 not found: ID does not exist" 
containerID="87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.359290 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743"} err="failed to get container status \"87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743\": rpc error: code = NotFound desc = could not find container \"87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743\": container with ID starting with 87d7d26f39abd4fdd2a0788ba0ddac9b4c869e8ba0a73a500932e66d65548743 not found: ID does not exist" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.371368 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c9ptl"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.372564 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgp58\" (UniqueName: \"kubernetes.io/projected/d9c2e869-61be-409d-ab69-60b0d6e87ca6-kube-api-access-dgp58\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.372177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-config-data\") pod \"nova-metadata-0\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.375045 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.388994 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c9ptl"] Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.436818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.436882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-scripts\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.436909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6lv\" (UniqueName: \"kubernetes.io/projected/ce6686a3-5933-4fb9-920b-29d135f7e46f-kube-api-access-5g6lv\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.436930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-logs\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.436989 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-config-data\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-config-data\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437044 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-log-httpd\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437072 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-config-data\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437221 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-run-httpd\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gp8\" (UniqueName: \"kubernetes.io/projected/3a1db388-a224-4e6f-b54d-de7b0321a518-kube-api-access-x5gp8\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.437304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnrxr\" (UniqueName: \"kubernetes.io/projected/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-kube-api-access-bnrxr\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.441029 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-log-httpd\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.441210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-run-httpd\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.449683 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.450071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-config-data\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.450355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.450606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.451106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-config-data\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.454254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-scripts\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.462123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6lv\" (UniqueName: \"kubernetes.io/projected/ce6686a3-5933-4fb9-920b-29d135f7e46f-kube-api-access-5g6lv\") pod \"nova-scheduler-0\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.464786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gp8\" (UniqueName: \"kubernetes.io/projected/3a1db388-a224-4e6f-b54d-de7b0321a518-kube-api-access-x5gp8\") pod \"ceilometer-0\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnrxr\" (UniqueName: \"kubernetes.io/projected/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-kube-api-access-bnrxr\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543409 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-config\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543453 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf6c\" (UniqueName: \"kubernetes.io/projected/a3ab00a2-637c-483b-a649-a7b692b54668-kube-api-access-xgf6c\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-logs\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-config-data\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543570 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543599 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.543634 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-svc\") pod 
\"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.548648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.548957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-logs\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.559445 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-config-data\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.581974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnrxr\" (UniqueName: \"kubernetes.io/projected/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-kube-api-access-bnrxr\") pod \"nova-api-0\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.607580 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-64xph"] Dec 04 06:30:40 crc kubenswrapper[4832]: W1204 06:30:40.619590 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cdd38c2_1620_41ab_bb2e_7a82a7a0858e.slice/crio-21ec32530958cb13051d2355bc89e3ac1c545efbe782b0dd60ad39a8b178d4dd WatchSource:0}: Error finding container 21ec32530958cb13051d2355bc89e3ac1c545efbe782b0dd60ad39a8b178d4dd: Status 404 returned error can't find the container with id 21ec32530958cb13051d2355bc89e3ac1c545efbe782b0dd60ad39a8b178d4dd Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.633630 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.645603 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.645667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.645717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-svc\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.645844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-config\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.645901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.645927 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgf6c\" (UniqueName: \"kubernetes.io/projected/a3ab00a2-637c-483b-a649-a7b692b54668-kube-api-access-xgf6c\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.647546 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.648740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.649203 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 
crc kubenswrapper[4832]: I1204 06:30:40.658516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-config\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.658699 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-svc\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.682812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgf6c\" (UniqueName: \"kubernetes.io/projected/a3ab00a2-637c-483b-a649-a7b692b54668-kube-api-access-xgf6c\") pod \"dnsmasq-dns-757b4f8459-c9ptl\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.698660 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.727452 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.731303 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61591f62-5ac1-4661-9262-ec4a61089a74" path="/var/lib/kubelet/pods/61591f62-5ac1-4661-9262-ec4a61089a74/volumes" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.737845 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.760181 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.845816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-64xph" event={"ID":"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e","Type":"ContainerStarted","Data":"21ec32530958cb13051d2355bc89e3ac1c545efbe782b0dd60ad39a8b178d4dd"} Dec 04 06:30:40 crc kubenswrapper[4832]: I1204 06:30:40.977331 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.007888 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9929d"] Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.015725 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.019325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.019629 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 04 06:30:41 crc kubenswrapper[4832]: W1204 06:30:41.042061 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d7586a_ce09_409b_ac0b_a310bf90dec0.slice/crio-ff0496e577bd656461baa10b991087f537502e5ee800f0f60042d8831413ca97 WatchSource:0}: Error finding container ff0496e577bd656461baa10b991087f537502e5ee800f0f60042d8831413ca97: Status 404 returned error can't find the container with id ff0496e577bd656461baa10b991087f537502e5ee800f0f60042d8831413ca97 Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.047075 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9929d"] Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.162784 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nzp\" (UniqueName: \"kubernetes.io/projected/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-kube-api-access-t6nzp\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.163139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-config-data\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.163206 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.163248 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-scripts\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.242445 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.269195 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6nzp\" (UniqueName: \"kubernetes.io/projected/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-kube-api-access-t6nzp\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.269778 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-config-data\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.269853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.269897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-scripts\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: W1204 06:30:41.277223 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c2e869_61be_409d_ab69_60b0d6e87ca6.slice/crio-392849d306434753d894c58b2e8c9360bcaee78f90feaad295476107b0c68061 WatchSource:0}: Error finding container 392849d306434753d894c58b2e8c9360bcaee78f90feaad295476107b0c68061: Status 404 returned error can't find the container with id 392849d306434753d894c58b2e8c9360bcaee78f90feaad295476107b0c68061 Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.292634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-config-data\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.295159 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.301321 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-scripts\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.310040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nzp\" (UniqueName: \"kubernetes.io/projected/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-kube-api-access-t6nzp\") pod \"nova-cell1-conductor-db-sync-9929d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.351030 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.466726 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.689494 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:41 crc kubenswrapper[4832]: W1204 06:30:41.699123 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ab00a2_637c_483b_a649_a7b692b54668.slice/crio-5362db84cef4bd024f2ad79e075a7a43af5694dcc38fadb33de6d65e0206e6da WatchSource:0}: Error finding container 5362db84cef4bd024f2ad79e075a7a43af5694dcc38fadb33de6d65e0206e6da: Status 404 returned error can't find the container with id 5362db84cef4bd024f2ad79e075a7a43af5694dcc38fadb33de6d65e0206e6da Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.706695 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c9ptl"] Dec 04 06:30:41 crc kubenswrapper[4832]: W1204 06:30:41.706880 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdc03dbf_34b3_42a5_bb18_265f0faa8e30.slice/crio-d0569a7434d85bd0ca2a5331899f66c5bb8b86828bd30f92fd602cbc0c4b6b7d WatchSource:0}: Error finding container d0569a7434d85bd0ca2a5331899f66c5bb8b86828bd30f92fd602cbc0c4b6b7d: Status 404 returned error can't find the container with id d0569a7434d85bd0ca2a5331899f66c5bb8b86828bd30f92fd602cbc0c4b6b7d Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.718405 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.868101 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cdc03dbf-34b3-42a5-bb18-265f0faa8e30","Type":"ContainerStarted","Data":"d0569a7434d85bd0ca2a5331899f66c5bb8b86828bd30f92fd602cbc0c4b6b7d"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.873843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce6686a3-5933-4fb9-920b-29d135f7e46f","Type":"ContainerStarted","Data":"ac8db24e0d6e70d499fdf5bff4237b62bed552f914f1e8e8f8fd3832f6a8d349"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.876685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" event={"ID":"a3ab00a2-637c-483b-a649-a7b692b54668","Type":"ContainerStarted","Data":"5362db84cef4bd024f2ad79e075a7a43af5694dcc38fadb33de6d65e0206e6da"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.882082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c1d7586a-ce09-409b-ac0b-a310bf90dec0","Type":"ContainerStarted","Data":"ff0496e577bd656461baa10b991087f537502e5ee800f0f60042d8831413ca97"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.886703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-64xph" event={"ID":"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e","Type":"ContainerStarted","Data":"a612f43ec383e04b6d664e1d71b42aeb185003c310a63c9f24ff85edc33ddd69"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.890574 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d9c2e869-61be-409d-ab69-60b0d6e87ca6","Type":"ContainerStarted","Data":"392849d306434753d894c58b2e8c9360bcaee78f90feaad295476107b0c68061"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.893125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerStarted","Data":"c4a05a384519deb94a0a924bdcda3a02e2ee7e1c74aafcafc35d8c8e16696064"} Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.940754 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-64xph" podStartSLOduration=2.940736288 podStartE2EDuration="2.940736288s" podCreationTimestamp="2025-12-04 06:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:41.912217461 +0000 UTC m=+1297.525035167" watchObservedRunningTime="2025-12-04 06:30:41.940736288 +0000 UTC m=+1297.553553984" Dec 04 06:30:41 crc kubenswrapper[4832]: I1204 06:30:41.949051 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9929d"] Dec 04 06:30:41 crc kubenswrapper[4832]: W1204 06:30:41.975272 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod933ec7f2_4591_4b3c_b681_d97d7ef7d41d.slice/crio-e346e880ef49ca6327bbbd08b6d9082686ff31794737307bb47f5355f44d483f WatchSource:0}: Error finding container e346e880ef49ca6327bbbd08b6d9082686ff31794737307bb47f5355f44d483f: Status 404 returned error can't find the container with id e346e880ef49ca6327bbbd08b6d9082686ff31794737307bb47f5355f44d483f Dec 04 06:30:42 crc kubenswrapper[4832]: I1204 06:30:42.928891 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3ab00a2-637c-483b-a649-a7b692b54668" containerID="cb79c867a37393e6ba45915b2d336c72900bb45e94b0438311b411ba42c1eb69" exitCode=0 Dec 04 06:30:42 crc kubenswrapper[4832]: I1204 06:30:42.929283 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" event={"ID":"a3ab00a2-637c-483b-a649-a7b692b54668","Type":"ContainerDied","Data":"cb79c867a37393e6ba45915b2d336c72900bb45e94b0438311b411ba42c1eb69"} Dec 04 06:30:42 crc kubenswrapper[4832]: I1204 06:30:42.940524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerStarted","Data":"970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e"} Dec 04 06:30:42 crc kubenswrapper[4832]: I1204 06:30:42.951945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9929d" event={"ID":"933ec7f2-4591-4b3c-b681-d97d7ef7d41d","Type":"ContainerStarted","Data":"e553fd559fcf2eb2a0c3dc8cb7bab8282ab44e3610cdf75f21922629a3127a45"} Dec 04 06:30:42 crc kubenswrapper[4832]: I1204 06:30:42.951979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9929d" event={"ID":"933ec7f2-4591-4b3c-b681-d97d7ef7d41d","Type":"ContainerStarted","Data":"e346e880ef49ca6327bbbd08b6d9082686ff31794737307bb47f5355f44d483f"} Dec 04 06:30:44 crc kubenswrapper[4832]: I1204 06:30:44.181646 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9929d" podStartSLOduration=4.18162734 podStartE2EDuration="4.18162734s" podCreationTimestamp="2025-12-04 06:30:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:42.981339129 +0000 UTC m=+1298.594156835" watchObservedRunningTime="2025-12-04 06:30:44.18162734 +0000 UTC m=+1299.794445046" Dec 04 06:30:44 crc kubenswrapper[4832]: I1204 06:30:44.191197 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:30:44 crc kubenswrapper[4832]: I1204 06:30:44.201271 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:30:45 crc kubenswrapper[4832]: I1204 06:30:45.992457 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerStarted","Data":"f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c"} Dec 04 06:30:45 crc kubenswrapper[4832]: I1204 06:30:45.996134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cdc03dbf-34b3-42a5-bb18-265f0faa8e30","Type":"ContainerStarted","Data":"7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4"} Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.008374 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce6686a3-5933-4fb9-920b-29d135f7e46f","Type":"ContainerStarted","Data":"dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949"} Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.023333 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.534476073 podStartE2EDuration="6.023311219s" podCreationTimestamp="2025-12-04 06:30:40 +0000 UTC" firstStartedPulling="2025-12-04 06:30:41.723157431 +0000 UTC m=+1297.335975137" lastFinishedPulling="2025-12-04 06:30:45.211992576 +0000 UTC m=+1300.824810283" observedRunningTime="2025-12-04 06:30:46.01888969 +0000 UTC m=+1301.631707396" watchObservedRunningTime="2025-12-04 06:30:46.023311219 +0000 UTC m=+1301.636128925" Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.028840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" event={"ID":"a3ab00a2-637c-483b-a649-a7b692b54668","Type":"ContainerStarted","Data":"f88ab60c0d3357807f4ca3054dc6bd5c1e2a542cb22a29df519a0b2e1c052acb"} Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.029773 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.046240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c1d7586a-ce09-409b-ac0b-a310bf90dec0","Type":"ContainerStarted","Data":"f5ed1980a10505655ea7a7f8be6421bfa12bb63204a5aa5edfa3c52301f87744"} Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.046318 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c1d7586a-ce09-409b-ac0b-a310bf90dec0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f5ed1980a10505655ea7a7f8be6421bfa12bb63204a5aa5edfa3c52301f87744" gracePeriod=30 Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.052718 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.517658827 podStartE2EDuration="6.052694698s" podCreationTimestamp="2025-12-04 06:30:40 +0000 UTC" 
firstStartedPulling="2025-12-04 06:30:41.676930735 +0000 UTC m=+1297.289748441" lastFinishedPulling="2025-12-04 06:30:45.211966606 +0000 UTC m=+1300.824784312" observedRunningTime="2025-12-04 06:30:46.045003998 +0000 UTC m=+1301.657821704" watchObservedRunningTime="2025-12-04 06:30:46.052694698 +0000 UTC m=+1301.665512404" Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.061188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c2e869-61be-409d-ab69-60b0d6e87ca6","Type":"ContainerStarted","Data":"9ad911dad2eb20bd1b743587735a7b80b02628c76de130a635cd134053d94ad5"} Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.061340 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-log" containerID="cri-o://9ad911dad2eb20bd1b743587735a7b80b02628c76de130a635cd134053d94ad5" gracePeriod=30 Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.061740 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-metadata" containerID="cri-o://4dee36ac56fab7c4baf0317490acd32b0d7e3dd45acd0b015a16068dfd704e62" gracePeriod=30 Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.083293 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" podStartSLOduration=6.083265647 podStartE2EDuration="6.083265647s" podCreationTimestamp="2025-12-04 06:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:46.067975377 +0000 UTC m=+1301.680793083" watchObservedRunningTime="2025-12-04 06:30:46.083265647 +0000 UTC m=+1301.696083363" Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.096485 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.937988561 podStartE2EDuration="7.096466165s" podCreationTimestamp="2025-12-04 06:30:39 +0000 UTC" firstStartedPulling="2025-12-04 06:30:41.055071661 +0000 UTC m=+1296.667889367" lastFinishedPulling="2025-12-04 06:30:45.213549265 +0000 UTC m=+1300.826366971" observedRunningTime="2025-12-04 06:30:46.090356782 +0000 UTC m=+1301.703174488" watchObservedRunningTime="2025-12-04 06:30:46.096466165 +0000 UTC m=+1301.709283871" Dec 04 06:30:46 crc kubenswrapper[4832]: I1204 06:30:46.122805 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.202656655 podStartE2EDuration="7.122783347s" podCreationTimestamp="2025-12-04 06:30:39 +0000 UTC" firstStartedPulling="2025-12-04 06:30:41.29210575 +0000 UTC m=+1296.904923446" lastFinishedPulling="2025-12-04 06:30:45.212232432 +0000 UTC m=+1300.825050138" observedRunningTime="2025-12-04 06:30:46.108985645 +0000 UTC m=+1301.721803351" watchObservedRunningTime="2025-12-04 06:30:46.122783347 +0000 UTC m=+1301.735601053" Dec 04 06:30:47 crc kubenswrapper[4832]: I1204 06:30:47.076933 4832 generic.go:334] "Generic (PLEG): container finished" podID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerID="9ad911dad2eb20bd1b743587735a7b80b02628c76de130a635cd134053d94ad5" exitCode=143 Dec 04 06:30:47 crc kubenswrapper[4832]: I1204 06:30:47.077136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d9c2e869-61be-409d-ab69-60b0d6e87ca6","Type":"ContainerStarted","Data":"4dee36ac56fab7c4baf0317490acd32b0d7e3dd45acd0b015a16068dfd704e62"} Dec 04 06:30:47 crc kubenswrapper[4832]: I1204 06:30:47.077453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c2e869-61be-409d-ab69-60b0d6e87ca6","Type":"ContainerDied","Data":"9ad911dad2eb20bd1b743587735a7b80b02628c76de130a635cd134053d94ad5"} Dec 04 06:30:47 crc kubenswrapper[4832]: I1204 06:30:47.080716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerStarted","Data":"3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3"} Dec 04 06:30:47 crc kubenswrapper[4832]: I1204 06:30:47.083695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cdc03dbf-34b3-42a5-bb18-265f0faa8e30","Type":"ContainerStarted","Data":"9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7"} Dec 04 06:30:49 crc kubenswrapper[4832]: I1204 06:30:49.108552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerStarted","Data":"0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a"} Dec 04 06:30:49 crc kubenswrapper[4832]: I1204 06:30:49.109602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:30:49 crc kubenswrapper[4832]: I1204 06:30:49.154151 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.509156057 podStartE2EDuration="10.154126134s" podCreationTimestamp="2025-12-04 06:30:39 +0000 UTC" firstStartedPulling="2025-12-04 06:30:41.502613162 +0000 UTC m=+1297.115430868" lastFinishedPulling="2025-12-04 06:30:48.147583239 +0000 UTC m=+1303.760400945" observedRunningTime="2025-12-04 06:30:49.136723793 +0000 UTC m=+1304.749541499" watchObservedRunningTime="2025-12-04 06:30:49.154126134 +0000 UTC m=+1304.766943840" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.119929 4832 generic.go:334] "Generic (PLEG): container finished" podID="4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" containerID="a612f43ec383e04b6d664e1d71b42aeb185003c310a63c9f24ff85edc33ddd69" exitCode=0 Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.119989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-64xph" event={"ID":"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e","Type":"ContainerDied","Data":"a612f43ec383e04b6d664e1d71b42aeb185003c310a63c9f24ff85edc33ddd69"} Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.275495 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.636946 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.637003 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.730593 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.730705 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 04 06:30:50 crc 
kubenswrapper[4832]: I1204 06:30:50.739341 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.739399 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.760162 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.761493 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.849873 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-958hb"] Dec 04 06:30:50 crc kubenswrapper[4832]: I1204 06:30:50.850122 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" podUID="cd410c96-2fee-471a-8807-257ea9328e20" containerName="dnsmasq-dns" containerID="cri-o://2af4948344d09db3fc13ce47a313ebb0004d542e2f844841473bb1b5a12693d1" gracePeriod=10 Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.151433 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd410c96-2fee-471a-8807-257ea9328e20" containerID="2af4948344d09db3fc13ce47a313ebb0004d542e2f844841473bb1b5a12693d1" exitCode=0 Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.151557 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" event={"ID":"cd410c96-2fee-471a-8807-257ea9328e20","Type":"ContainerDied","Data":"2af4948344d09db3fc13ce47a313ebb0004d542e2f844841473bb1b5a12693d1"} Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.154275 4832 generic.go:334] "Generic (PLEG): container finished" podID="933ec7f2-4591-4b3c-b681-d97d7ef7d41d" containerID="e553fd559fcf2eb2a0c3dc8cb7bab8282ab44e3610cdf75f21922629a3127a45" exitCode=0 Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.154504 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9929d" event={"ID":"933ec7f2-4591-4b3c-b681-d97d7ef7d41d","Type":"ContainerDied","Data":"e553fd559fcf2eb2a0c3dc8cb7bab8282ab44e3610cdf75f21922629a3127a45"} Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.197663 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.496643 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.625954 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.660231 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-svc\") pod \"cd410c96-2fee-471a-8807-257ea9328e20\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.660346 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-swift-storage-0\") pod \"cd410c96-2fee-471a-8807-257ea9328e20\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.660463 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-nb\") pod \"cd410c96-2fee-471a-8807-257ea9328e20\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.660697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd62x\" (UniqueName: \"kubernetes.io/projected/cd410c96-2fee-471a-8807-257ea9328e20-kube-api-access-qd62x\") pod \"cd410c96-2fee-471a-8807-257ea9328e20\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.660749 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-config\") pod \"cd410c96-2fee-471a-8807-257ea9328e20\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.660786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-sb\") pod \"cd410c96-2fee-471a-8807-257ea9328e20\" (UID: \"cd410c96-2fee-471a-8807-257ea9328e20\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.682635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd410c96-2fee-471a-8807-257ea9328e20-kube-api-access-qd62x" (OuterVolumeSpecName: "kube-api-access-qd62x") pod "cd410c96-2fee-471a-8807-257ea9328e20" (UID: "cd410c96-2fee-471a-8807-257ea9328e20"). InnerVolumeSpecName "kube-api-access-qd62x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.719618 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd410c96-2fee-471a-8807-257ea9328e20" (UID: "cd410c96-2fee-471a-8807-257ea9328e20"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.726091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-config" (OuterVolumeSpecName: "config") pod "cd410c96-2fee-471a-8807-257ea9328e20" (UID: "cd410c96-2fee-471a-8807-257ea9328e20"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.726255 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd410c96-2fee-471a-8807-257ea9328e20" (UID: "cd410c96-2fee-471a-8807-257ea9328e20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.729886 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd410c96-2fee-471a-8807-257ea9328e20" (UID: "cd410c96-2fee-471a-8807-257ea9328e20"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.741938 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd410c96-2fee-471a-8807-257ea9328e20" (UID: "cd410c96-2fee-471a-8807-257ea9328e20"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.762616 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-combined-ca-bundle\") pod \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.762847 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-scripts\") pod \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.762942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-config-data\") pod \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24b6k\" (UniqueName: \"kubernetes.io/projected/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-kube-api-access-24b6k\") pod \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\" (UID: \"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e\") " Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763502 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd62x\" (UniqueName: \"kubernetes.io/projected/cd410c96-2fee-471a-8807-257ea9328e20-kube-api-access-qd62x\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763521 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763533 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-sb\") 
on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763542 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763551 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.763562 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd410c96-2fee-471a-8807-257ea9328e20-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.767019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-scripts" (OuterVolumeSpecName: "scripts") pod "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" (UID: "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.771585 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-kube-api-access-24b6k" (OuterVolumeSpecName: "kube-api-access-24b6k") pod "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" (UID: "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e"). InnerVolumeSpecName "kube-api-access-24b6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.781942 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.792715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-config-data" (OuterVolumeSpecName: "config-data") pod "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" (UID: "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.794710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" (UID: "4cdd38c2-1620-41ab-bb2e-7a82a7a0858e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.822606 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.865908 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.865949 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.865963 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24b6k\" (UniqueName: \"kubernetes.io/projected/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-kube-api-access-24b6k\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:51 crc kubenswrapper[4832]: I1204 06:30:51.865975 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.170153 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-64xph" event={"ID":"4cdd38c2-1620-41ab-bb2e-7a82a7a0858e","Type":"ContainerDied","Data":"21ec32530958cb13051d2355bc89e3ac1c545efbe782b0dd60ad39a8b178d4dd"} Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.170194 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ec32530958cb13051d2355bc89e3ac1c545efbe782b0dd60ad39a8b178d4dd" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.170247 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-64xph" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.177842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" event={"ID":"cd410c96-2fee-471a-8807-257ea9328e20","Type":"ContainerDied","Data":"d37f111298ac4e4de81d29f635676cde2a3d14e53ac2ed8df9416775812f2a7e"} Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.177915 4832 scope.go:117] "RemoveContainer" containerID="2af4948344d09db3fc13ce47a313ebb0004d542e2f844841473bb1b5a12693d1" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.178125 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-958hb" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.225603 4832 scope.go:117] "RemoveContainer" containerID="abc07c133cee71c5fd35acf629bf73d747585ec1cec97c6c06e14575507a7f43" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.260470 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-958hb"] Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.269811 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-958hb"] Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.295442 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.295735 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-log" containerID="cri-o://7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4" gracePeriod=30 Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.296357 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-api" containerID="cri-o://9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7" gracePeriod=30 Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.310496 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.647628 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.726117 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd410c96-2fee-471a-8807-257ea9328e20" path="/var/lib/kubelet/pods/cd410c96-2fee-471a-8807-257ea9328e20/volumes" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.784838 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6nzp\" (UniqueName: \"kubernetes.io/projected/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-kube-api-access-t6nzp\") pod \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.785017 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-combined-ca-bundle\") pod \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.785098 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-config-data\") pod \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.785235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-scripts\") pod \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\" (UID: \"933ec7f2-4591-4b3c-b681-d97d7ef7d41d\") " Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.790471 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-kube-api-access-t6nzp" (OuterVolumeSpecName: "kube-api-access-t6nzp") pod "933ec7f2-4591-4b3c-b681-d97d7ef7d41d" (UID: "933ec7f2-4591-4b3c-b681-d97d7ef7d41d"). InnerVolumeSpecName "kube-api-access-t6nzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.791048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-scripts" (OuterVolumeSpecName: "scripts") pod "933ec7f2-4591-4b3c-b681-d97d7ef7d41d" (UID: "933ec7f2-4591-4b3c-b681-d97d7ef7d41d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.813580 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "933ec7f2-4591-4b3c-b681-d97d7ef7d41d" (UID: "933ec7f2-4591-4b3c-b681-d97d7ef7d41d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.815493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-config-data" (OuterVolumeSpecName: "config-data") pod "933ec7f2-4591-4b3c-b681-d97d7ef7d41d" (UID: "933ec7f2-4591-4b3c-b681-d97d7ef7d41d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.887469 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.887501 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.887510 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:52 crc kubenswrapper[4832]: I1204 06:30:52.887521 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6nzp\" (UniqueName: \"kubernetes.io/projected/933ec7f2-4591-4b3c-b681-d97d7ef7d41d-kube-api-access-t6nzp\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.189269 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerID="7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4" exitCode=143 Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.189343 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cdc03dbf-34b3-42a5-bb18-265f0faa8e30","Type":"ContainerDied","Data":"7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4"} Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.191069 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9929d" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.191076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9929d" event={"ID":"933ec7f2-4591-4b3c-b681-d97d7ef7d41d","Type":"ContainerDied","Data":"e346e880ef49ca6327bbbd08b6d9082686ff31794737307bb47f5355f44d483f"} Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.191133 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e346e880ef49ca6327bbbd08b6d9082686ff31794737307bb47f5355f44d483f" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259114 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 04 06:30:53 crc kubenswrapper[4832]: E1204 06:30:53.259547 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd410c96-2fee-471a-8807-257ea9328e20" containerName="dnsmasq-dns" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259566 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd410c96-2fee-471a-8807-257ea9328e20" containerName="dnsmasq-dns" Dec 04 06:30:53 crc kubenswrapper[4832]: E1204 06:30:53.259602 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933ec7f2-4591-4b3c-b681-d97d7ef7d41d" containerName="nova-cell1-conductor-db-sync" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259611 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="933ec7f2-4591-4b3c-b681-d97d7ef7d41d" containerName="nova-cell1-conductor-db-sync" Dec 04 06:30:53 crc kubenswrapper[4832]: E1204 06:30:53.259624 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" containerName="nova-manage" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259630 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" containerName="nova-manage" Dec 04 06:30:53 crc kubenswrapper[4832]: E1204 06:30:53.259650 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd410c96-2fee-471a-8807-257ea9328e20" containerName="init" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259658 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd410c96-2fee-471a-8807-257ea9328e20" containerName="init" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259825 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" containerName="nova-manage" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259850 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="933ec7f2-4591-4b3c-b681-d97d7ef7d41d" containerName="nova-cell1-conductor-db-sync" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.259865 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd410c96-2fee-471a-8807-257ea9328e20" containerName="dnsmasq-dns" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.260496 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.274191 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.277952 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.396557 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.396708 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.396741 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2htc8\" (UniqueName: \"kubernetes.io/projected/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-kube-api-access-2htc8\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.498918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.499318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.499354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2htc8\" (UniqueName: \"kubernetes.io/projected/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-kube-api-access-2htc8\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.512743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.516099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.517578 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2htc8\" (UniqueName: \"kubernetes.io/projected/d02a8c92-b3b4-4c91-8d11-e0937e4928e0-kube-api-access-2htc8\") pod \"nova-cell1-conductor-0\" (UID: \"d02a8c92-b3b4-4c91-8d11-e0937e4928e0\") " pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:53 crc kubenswrapper[4832]: I1204 06:30:53.578664 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:54 crc kubenswrapper[4832]: I1204 06:30:54.108372 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 04 06:30:54 crc kubenswrapper[4832]: W1204 06:30:54.111880 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02a8c92_b3b4_4c91_8d11_e0937e4928e0.slice/crio-fe0584b66bbb4b89c176110ce02ed22ba1f0e31726fe46f8adb0e8c0c287726c WatchSource:0}: Error finding container fe0584b66bbb4b89c176110ce02ed22ba1f0e31726fe46f8adb0e8c0c287726c: Status 404 returned error can't find the container with id fe0584b66bbb4b89c176110ce02ed22ba1f0e31726fe46f8adb0e8c0c287726c Dec 04 06:30:54 crc kubenswrapper[4832]: I1204 06:30:54.203255 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ce6686a3-5933-4fb9-920b-29d135f7e46f" containerName="nova-scheduler-scheduler" containerID="cri-o://dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" gracePeriod=30 Dec 04 06:30:54 crc kubenswrapper[4832]: I1204 06:30:54.203582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d02a8c92-b3b4-4c91-8d11-e0937e4928e0","Type":"ContainerStarted","Data":"fe0584b66bbb4b89c176110ce02ed22ba1f0e31726fe46f8adb0e8c0c287726c"} Dec 04 06:30:55 crc kubenswrapper[4832]: I1204 06:30:55.217069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d02a8c92-b3b4-4c91-8d11-e0937e4928e0","Type":"ContainerStarted","Data":"a4875d6dcc2422f56d5771134563d1d272644bec7eb0f53a57cec12306497d46"} Dec 04 06:30:55 crc kubenswrapper[4832]: I1204 06:30:55.217793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 04 06:30:55 crc kubenswrapper[4832]: I1204 06:30:55.235655 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.235635866 podStartE2EDuration="2.235635866s" podCreationTimestamp="2025-12-04 06:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:30:55.232314114 +0000 UTC m=+1310.845131820" watchObservedRunningTime="2025-12-04 06:30:55.235635866 +0000 UTC m=+1310.848453572" Dec 04 06:30:55 crc kubenswrapper[4832]: E1204 06:30:55.732810 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 04 06:30:55 crc kubenswrapper[4832]: E1204 06:30:55.734122 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 04 06:30:55 crc kubenswrapper[4832]: E1204 06:30:55.735648 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 04 06:30:55 crc kubenswrapper[4832]: E1204 06:30:55.735688 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ce6686a3-5933-4fb9-920b-29d135f7e46f" containerName="nova-scheduler-scheduler" Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.786628 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.916890 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-combined-ca-bundle\") pod \"ce6686a3-5933-4fb9-920b-29d135f7e46f\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.917009 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g6lv\" (UniqueName: \"kubernetes.io/projected/ce6686a3-5933-4fb9-920b-29d135f7e46f-kube-api-access-5g6lv\") pod \"ce6686a3-5933-4fb9-920b-29d135f7e46f\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.917040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-config-data\") pod \"ce6686a3-5933-4fb9-920b-29d135f7e46f\" (UID: \"ce6686a3-5933-4fb9-920b-29d135f7e46f\") " Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.935286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6686a3-5933-4fb9-920b-29d135f7e46f-kube-api-access-5g6lv" (OuterVolumeSpecName: "kube-api-access-5g6lv") pod "ce6686a3-5933-4fb9-920b-29d135f7e46f" (UID: "ce6686a3-5933-4fb9-920b-29d135f7e46f"). InnerVolumeSpecName "kube-api-access-5g6lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.945672 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce6686a3-5933-4fb9-920b-29d135f7e46f" (UID: "ce6686a3-5933-4fb9-920b-29d135f7e46f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:57 crc kubenswrapper[4832]: I1204 06:30:57.963551 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-config-data" (OuterVolumeSpecName: "config-data") pod "ce6686a3-5933-4fb9-920b-29d135f7e46f" (UID: "ce6686a3-5933-4fb9-920b-29d135f7e46f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.019818 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.019852 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g6lv\" (UniqueName: \"kubernetes.io/projected/ce6686a3-5933-4fb9-920b-29d135f7e46f-kube-api-access-5g6lv\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.019866 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce6686a3-5933-4fb9-920b-29d135f7e46f-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.176509 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.247059 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerID="9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7" exitCode=0 Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.247116 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.247142 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cdc03dbf-34b3-42a5-bb18-265f0faa8e30","Type":"ContainerDied","Data":"9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7"} Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.247186 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cdc03dbf-34b3-42a5-bb18-265f0faa8e30","Type":"ContainerDied","Data":"d0569a7434d85bd0ca2a5331899f66c5bb8b86828bd30f92fd602cbc0c4b6b7d"} Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.247209 4832 scope.go:117] "RemoveContainer" containerID="9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.248726 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce6686a3-5933-4fb9-920b-29d135f7e46f" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" exitCode=0 Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.248757 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce6686a3-5933-4fb9-920b-29d135f7e46f","Type":"ContainerDied","Data":"dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949"} Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.248778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ce6686a3-5933-4fb9-920b-29d135f7e46f","Type":"ContainerDied","Data":"ac8db24e0d6e70d499fdf5bff4237b62bed552f914f1e8e8f8fd3832f6a8d349"} Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.248796 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.275480 4832 scope.go:117] "RemoveContainer" containerID="7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.288597 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.297802 4832 scope.go:117] "RemoveContainer" containerID="9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7" Dec 04 06:30:58 crc kubenswrapper[4832]: E1204 06:30:58.298343 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7\": container with ID starting with 9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7 not found: ID does not exist" containerID="9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.298372 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7"} err="failed to get container status \"9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7\": rpc error: code = NotFound desc = could not find container \"9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7\": container with ID starting with 9adcb2df1deed64ffdf2a0bdae0da7057f0dfac916c40a47a5bebc82e187bbb7 not found: ID does not exist" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.298539 4832 scope.go:117] "RemoveContainer" containerID="7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4" Dec 04 06:30:58 crc kubenswrapper[4832]: E1204 06:30:58.298906 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4\": container with ID starting with 7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4 not found: ID does not exist" containerID="7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.298963 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4"} err="failed to get container status \"7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4\": rpc error: code = NotFound desc = could not find container \"7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4\": container with ID starting with 7a46417f97ec443ac31f901b847d6340c1ff4c1548838efc8ff3335f7e60d7e4 not found: ID does not exist" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.298995 4832 scope.go:117] "RemoveContainer" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.302445 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317070 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: E1204 06:30:58.317607 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6686a3-5933-4fb9-920b-29d135f7e46f" 
containerName="nova-scheduler-scheduler" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317626 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6686a3-5933-4fb9-920b-29d135f7e46f" containerName="nova-scheduler-scheduler" Dec 04 06:30:58 crc kubenswrapper[4832]: E1204 06:30:58.317644 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-log" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317651 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-log" Dec 04 06:30:58 crc kubenswrapper[4832]: E1204 06:30:58.317664 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-api" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317670 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-api" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317862 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-api" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317885 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6686a3-5933-4fb9-920b-29d135f7e46f" containerName="nova-scheduler-scheduler" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.317897 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" containerName="nova-api-log" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.318648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.322371 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.325499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnrxr\" (UniqueName: \"kubernetes.io/projected/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-kube-api-access-bnrxr\") pod \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.325691 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-config-data\") pod \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.325772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-combined-ca-bundle\") pod \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.325823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-logs\") pod \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\" (UID: \"cdc03dbf-34b3-42a5-bb18-265f0faa8e30\") " Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.326842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-logs" (OuterVolumeSpecName: "logs") pod "cdc03dbf-34b3-42a5-bb18-265f0faa8e30" (UID: "cdc03dbf-34b3-42a5-bb18-265f0faa8e30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.329693 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-kube-api-access-bnrxr" (OuterVolumeSpecName: "kube-api-access-bnrxr") pod "cdc03dbf-34b3-42a5-bb18-265f0faa8e30" (UID: "cdc03dbf-34b3-42a5-bb18-265f0faa8e30"). InnerVolumeSpecName "kube-api-access-bnrxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.332831 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.352813 4832 scope.go:117] "RemoveContainer" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" Dec 04 06:30:58 crc kubenswrapper[4832]: E1204 06:30:58.356763 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949\": container with ID starting with dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949 not found: ID does not exist" containerID="dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.356821 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949"} err="failed to get container status \"dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949\": rpc error: code = NotFound desc = could not find container \"dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949\": container with ID starting with dc8c403dc6cd62bf117f7daac8c0b938040096906c0968642e36163c7f8a5949 not found: ID does not exist" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.362244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdc03dbf-34b3-42a5-bb18-265f0faa8e30" (UID: "cdc03dbf-34b3-42a5-bb18-265f0faa8e30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.380554 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-config-data" (OuterVolumeSpecName: "config-data") pod "cdc03dbf-34b3-42a5-bb18-265f0faa8e30" (UID: "cdc03dbf-34b3-42a5-bb18-265f0faa8e30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brrm\" (UniqueName: \"kubernetes.io/projected/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-kube-api-access-5brrm\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428519 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-config-data\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428746 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428768 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428780 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnrxr\" (UniqueName: \"kubernetes.io/projected/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-kube-api-access-bnrxr\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.428790 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc03dbf-34b3-42a5-bb18-265f0faa8e30-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.530757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-config-data\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.530841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.530931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brrm\" (UniqueName: \"kubernetes.io/projected/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-kube-api-access-5brrm\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.541508 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.541527 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-config-data\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.547204 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brrm\" (UniqueName: \"kubernetes.io/projected/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-kube-api-access-5brrm\") pod \"nova-scheduler-0\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.636230 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.645425 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.657492 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.659148 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.664646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.727248 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdc03dbf-34b3-42a5-bb18-265f0faa8e30" path="/var/lib/kubelet/pods/cdc03dbf-34b3-42a5-bb18-265f0faa8e30/volumes" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.727923 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6686a3-5933-4fb9-920b-29d135f7e46f" path="/var/lib/kubelet/pods/ce6686a3-5933-4fb9-920b-29d135f7e46f/volumes" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.728591 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.768869 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.836530 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsbm\" (UniqueName: \"kubernetes.io/projected/d93904fb-9acc-419d-9d7d-3e0effe0d457-kube-api-access-crsbm\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.836857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.837103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d93904fb-9acc-419d-9d7d-3e0effe0d457-logs\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.837157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-config-data\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.938948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d93904fb-9acc-419d-9d7d-3e0effe0d457-logs\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.939036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-config-data\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.939104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsbm\" (UniqueName: \"kubernetes.io/projected/d93904fb-9acc-419d-9d7d-3e0effe0d457-kube-api-access-crsbm\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.939132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.939501 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d93904fb-9acc-419d-9d7d-3e0effe0d457-logs\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.950280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-config-data\") pod \"nova-api-0\" (UID: 
\"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.951611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.959196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsbm\" (UniqueName: \"kubernetes.io/projected/d93904fb-9acc-419d-9d7d-3e0effe0d457-kube-api-access-crsbm\") pod \"nova-api-0\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " pod="openstack/nova-api-0" Dec 04 06:30:58 crc kubenswrapper[4832]: I1204 06:30:58.978126 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:30:59 crc kubenswrapper[4832]: I1204 06:30:59.261577 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:30:59 crc kubenswrapper[4832]: W1204 06:30:59.265336 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffea2af5_d5b2_4ac9_ba03_82606dd1cccf.slice/crio-544e9e6ebf50657cfcab76a468dabebe213f370180138f3846f4534abbb8609b WatchSource:0}: Error finding container 544e9e6ebf50657cfcab76a468dabebe213f370180138f3846f4534abbb8609b: Status 404 returned error can't find the container with id 544e9e6ebf50657cfcab76a468dabebe213f370180138f3846f4534abbb8609b Dec 04 06:30:59 crc kubenswrapper[4832]: I1204 06:30:59.449000 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:30:59 crc kubenswrapper[4832]: W1204 06:30:59.458108 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd93904fb_9acc_419d_9d7d_3e0effe0d457.slice/crio-8a087a380d1850d1c985c9d0dac023aba15d12ee78366307f6dc97daec21ef4c WatchSource:0}: Error finding container 8a087a380d1850d1c985c9d0dac023aba15d12ee78366307f6dc97daec21ef4c: Status 404 returned error can't find the container with id 8a087a380d1850d1c985c9d0dac023aba15d12ee78366307f6dc97daec21ef4c Dec 04 06:31:00 crc kubenswrapper[4832]: I1204 06:31:00.274825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d93904fb-9acc-419d-9d7d-3e0effe0d457","Type":"ContainerStarted","Data":"ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16"} Dec 04 06:31:00 crc kubenswrapper[4832]: I1204 06:31:00.275161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d93904fb-9acc-419d-9d7d-3e0effe0d457","Type":"ContainerStarted","Data":"afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266"} Dec 04 06:31:00 crc kubenswrapper[4832]: I1204 06:31:00.275178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d93904fb-9acc-419d-9d7d-3e0effe0d457","Type":"ContainerStarted","Data":"8a087a380d1850d1c985c9d0dac023aba15d12ee78366307f6dc97daec21ef4c"} Dec 04 06:31:00 crc kubenswrapper[4832]: I1204 06:31:00.276490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf","Type":"ContainerStarted","Data":"a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909"} Dec 04 06:31:00 crc 
kubenswrapper[4832]: I1204 06:31:00.276525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf","Type":"ContainerStarted","Data":"544e9e6ebf50657cfcab76a468dabebe213f370180138f3846f4534abbb8609b"} Dec 04 06:31:00 crc kubenswrapper[4832]: I1204 06:31:00.302782 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.302763098 podStartE2EDuration="2.302763098s" podCreationTimestamp="2025-12-04 06:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:00.294463422 +0000 UTC m=+1315.907281138" watchObservedRunningTime="2025-12-04 06:31:00.302763098 +0000 UTC m=+1315.915580804" Dec 04 06:31:00 crc kubenswrapper[4832]: I1204 06:31:00.318771 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.318747094 podStartE2EDuration="2.318747094s" podCreationTimestamp="2025-12-04 06:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:00.3129392 +0000 UTC m=+1315.925756906" watchObservedRunningTime="2025-12-04 06:31:00.318747094 +0000 UTC m=+1315.931564800" Dec 04 06:31:03 crc kubenswrapper[4832]: I1204 06:31:03.606426 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 04 06:31:03 crc kubenswrapper[4832]: I1204 06:31:03.769500 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 04 06:31:08 crc kubenswrapper[4832]: I1204 06:31:08.769748 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 04 06:31:08 crc kubenswrapper[4832]: I1204 06:31:08.798869 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 04 06:31:08 crc kubenswrapper[4832]: I1204 06:31:08.979629 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:31:08 crc kubenswrapper[4832]: I1204 06:31:08.979672 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:31:09 crc kubenswrapper[4832]: I1204 06:31:09.386026 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 04 06:31:10 crc kubenswrapper[4832]: I1204 06:31:10.065714 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:10 crc kubenswrapper[4832]: I1204 06:31:10.065831 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:10 crc kubenswrapper[4832]: I1204 06:31:10.705910 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 04 06:31:14 crc kubenswrapper[4832]: I1204 06:31:14.321633 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:31:14 crc kubenswrapper[4832]: I1204 06:31:14.322579 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="60daac54-910d-4a74-8a05-ab520ea21cab" containerName="kube-state-metrics" containerID="cri-o://1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284" gracePeriod=30 Dec 04 06:31:14 crc kubenswrapper[4832]: I1204 06:31:14.844905 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 06:31:14 crc kubenswrapper[4832]: I1204 06:31:14.970605 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp469\" (UniqueName: \"kubernetes.io/projected/60daac54-910d-4a74-8a05-ab520ea21cab-kube-api-access-zp469\") pod \"60daac54-910d-4a74-8a05-ab520ea21cab\" (UID: \"60daac54-910d-4a74-8a05-ab520ea21cab\") " Dec 04 06:31:14 crc kubenswrapper[4832]: I1204 06:31:14.978372 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60daac54-910d-4a74-8a05-ab520ea21cab-kube-api-access-zp469" (OuterVolumeSpecName: "kube-api-access-zp469") pod "60daac54-910d-4a74-8a05-ab520ea21cab" (UID: "60daac54-910d-4a74-8a05-ab520ea21cab"). InnerVolumeSpecName "kube-api-access-zp469". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.073681 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp469\" (UniqueName: \"kubernetes.io/projected/60daac54-910d-4a74-8a05-ab520ea21cab-kube-api-access-zp469\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.418483 4832 generic.go:334] "Generic (PLEG): container finished" podID="60daac54-910d-4a74-8a05-ab520ea21cab" containerID="1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284" exitCode=2 Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.418541 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.418540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60daac54-910d-4a74-8a05-ab520ea21cab","Type":"ContainerDied","Data":"1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284"} Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.418682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"60daac54-910d-4a74-8a05-ab520ea21cab","Type":"ContainerDied","Data":"c20613d91f7178771e1a2ef1cd50a324012b2435eb66b4566fed941b5365fd05"} Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.418721 4832 scope.go:117] "RemoveContainer" containerID="1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.450370 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.453217 4832 scope.go:117] "RemoveContainer" containerID="1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.460425 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:31:15 crc kubenswrapper[4832]: E1204 06:31:15.461931 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284\": container with ID starting with 1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284 not found: ID does not exist" containerID="1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.461984 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284"} err="failed to get container status \"1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284\": rpc error: code = NotFound desc = could not find container \"1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284\": container with ID starting with 1f92712c908501ea3510caf0341552cc8929362a7525c2d834b3ecc4dfdae284 not found: ID does not exist" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.484368 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:31:15 crc kubenswrapper[4832]: E1204 06:31:15.484891 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60daac54-910d-4a74-8a05-ab520ea21cab" containerName="kube-state-metrics" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.484916 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="60daac54-910d-4a74-8a05-ab520ea21cab" containerName="kube-state-metrics" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.485196 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="60daac54-910d-4a74-8a05-ab520ea21cab" containerName="kube-state-metrics" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.486079 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.489701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.490155 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.496722 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.586331 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.586468 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.586575 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.586607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r758s\" (UniqueName: \"kubernetes.io/projected/a8f66227-3513-4327-81ec-2f1f147294e8-kube-api-access-r758s\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.688503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.688580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r758s\" (UniqueName: \"kubernetes.io/projected/a8f66227-3513-4327-81ec-2f1f147294e8-kube-api-access-r758s\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.688700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.688801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.698162 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.698218 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.714044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r758s\" (UniqueName: \"kubernetes.io/projected/a8f66227-3513-4327-81ec-2f1f147294e8-kube-api-access-r758s\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.714938 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a8f66227-3513-4327-81ec-2f1f147294e8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a8f66227-3513-4327-81ec-2f1f147294e8\") " pod="openstack/kube-state-metrics-0" Dec 04 06:31:15 crc kubenswrapper[4832]: I1204 06:31:15.812481 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.317105 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.322798 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.440278 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f66227-3513-4327-81ec-2f1f147294e8","Type":"ContainerStarted","Data":"49c47c39a3c7c9038d43f24b27008f08af105e5c3bda7f05b01e8959e747142b"} Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.448908 4832 generic.go:334] "Generic (PLEG): container finished" podID="c1d7586a-ce09-409b-ac0b-a310bf90dec0" containerID="f5ed1980a10505655ea7a7f8be6421bfa12bb63204a5aa5edfa3c52301f87744" exitCode=137 Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.449005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c1d7586a-ce09-409b-ac0b-a310bf90dec0","Type":"ContainerDied","Data":"f5ed1980a10505655ea7a7f8be6421bfa12bb63204a5aa5edfa3c52301f87744"} Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.453667 4832 generic.go:334] "Generic (PLEG): container finished" podID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerID="4dee36ac56fab7c4baf0317490acd32b0d7e3dd45acd0b015a16068dfd704e62" exitCode=137 Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.453700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c2e869-61be-409d-ab69-60b0d6e87ca6","Type":"ContainerDied","Data":"4dee36ac56fab7c4baf0317490acd32b0d7e3dd45acd0b015a16068dfd704e62"} Dec 04 06:31:16 crc kubenswrapper[4832]: E1204 06:31:16.484490 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c2e869_61be_409d_ab69_60b0d6e87ca6.slice/crio-conmon-4dee36ac56fab7c4baf0317490acd32b0d7e3dd45acd0b015a16068dfd704e62.scope\": RecentStats: unable to find data in memory cache]" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.515868 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.599454 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.600278 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-central-agent" containerID="cri-o://970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e" gracePeriod=30 Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.600453 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="proxy-httpd" containerID="cri-o://0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a" gracePeriod=30 Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.600516 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="sg-core" containerID="cri-o://3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3" gracePeriod=30 Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.600563 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-notification-agent" containerID="cri-o://f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c" gracePeriod=30 Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.615832 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-config-data\") pod \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.615984 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgp58\" (UniqueName: \"kubernetes.io/projected/d9c2e869-61be-409d-ab69-60b0d6e87ca6-kube-api-access-dgp58\") pod \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.616137 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c2e869-61be-409d-ab69-60b0d6e87ca6-logs\") pod \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.616260 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-combined-ca-bundle\") pod \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\" (UID: \"d9c2e869-61be-409d-ab69-60b0d6e87ca6\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.617048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c2e869-61be-409d-ab69-60b0d6e87ca6-logs" (OuterVolumeSpecName: "logs") pod "d9c2e869-61be-409d-ab69-60b0d6e87ca6" (UID: "d9c2e869-61be-409d-ab69-60b0d6e87ca6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.619575 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.634172 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c2e869-61be-409d-ab69-60b0d6e87ca6-kube-api-access-dgp58" (OuterVolumeSpecName: "kube-api-access-dgp58") pod "d9c2e869-61be-409d-ab69-60b0d6e87ca6" (UID: "d9c2e869-61be-409d-ab69-60b0d6e87ca6"). InnerVolumeSpecName "kube-api-access-dgp58". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.664810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9c2e869-61be-409d-ab69-60b0d6e87ca6" (UID: "d9c2e869-61be-409d-ab69-60b0d6e87ca6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.669084 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-config-data" (OuterVolumeSpecName: "config-data") pod "d9c2e869-61be-409d-ab69-60b0d6e87ca6" (UID: "d9c2e869-61be-409d-ab69-60b0d6e87ca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.718920 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-config-data\") pod \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.719033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c729f\" (UniqueName: \"kubernetes.io/projected/c1d7586a-ce09-409b-ac0b-a310bf90dec0-kube-api-access-c729f\") pod \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.719415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-combined-ca-bundle\") pod \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\" (UID: \"c1d7586a-ce09-409b-ac0b-a310bf90dec0\") " Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.721861 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9c2e869-61be-409d-ab69-60b0d6e87ca6-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.721895 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.721914 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9c2e869-61be-409d-ab69-60b0d6e87ca6-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.721935 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgp58\" (UniqueName: \"kubernetes.io/projected/d9c2e869-61be-409d-ab69-60b0d6e87ca6-kube-api-access-dgp58\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:16 
crc kubenswrapper[4832]: I1204 06:31:16.730266 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d7586a-ce09-409b-ac0b-a310bf90dec0-kube-api-access-c729f" (OuterVolumeSpecName: "kube-api-access-c729f") pod "c1d7586a-ce09-409b-ac0b-a310bf90dec0" (UID: "c1d7586a-ce09-409b-ac0b-a310bf90dec0"). InnerVolumeSpecName "kube-api-access-c729f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.742698 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60daac54-910d-4a74-8a05-ab520ea21cab" path="/var/lib/kubelet/pods/60daac54-910d-4a74-8a05-ab520ea21cab/volumes" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.767843 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1d7586a-ce09-409b-ac0b-a310bf90dec0" (UID: "c1d7586a-ce09-409b-ac0b-a310bf90dec0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.768192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-config-data" (OuterVolumeSpecName: "config-data") pod "c1d7586a-ce09-409b-ac0b-a310bf90dec0" (UID: "c1d7586a-ce09-409b-ac0b-a310bf90dec0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.824187 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.824222 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c729f\" (UniqueName: \"kubernetes.io/projected/c1d7586a-ce09-409b-ac0b-a310bf90dec0-kube-api-access-c729f\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:16 crc kubenswrapper[4832]: I1204 06:31:16.824235 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1d7586a-ce09-409b-ac0b-a310bf90dec0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.472904 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c1d7586a-ce09-409b-ac0b-a310bf90dec0","Type":"ContainerDied","Data":"ff0496e577bd656461baa10b991087f537502e5ee800f0f60042d8831413ca97"} Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.473290 4832 scope.go:117] "RemoveContainer" containerID="f5ed1980a10505655ea7a7f8be6421bfa12bb63204a5aa5edfa3c52301f87744" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.473460 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.482829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9c2e869-61be-409d-ab69-60b0d6e87ca6","Type":"ContainerDied","Data":"392849d306434753d894c58b2e8c9360bcaee78f90feaad295476107b0c68061"} Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.482847 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.489039 4832 generic.go:334] "Generic (PLEG): container finished" podID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerID="0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a" exitCode=0 Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.489080 4832 generic.go:334] "Generic (PLEG): container finished" podID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerID="3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3" exitCode=2 Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.489091 4832 generic.go:334] "Generic (PLEG): container finished" podID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerID="970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e" exitCode=0 Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.489205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerDied","Data":"0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a"} Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.489240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerDied","Data":"3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3"} Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.489251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerDied","Data":"970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e"} Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.500189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f66227-3513-4327-81ec-2f1f147294e8","Type":"ContainerStarted","Data":"8439b2c76a5f695b0ef46fd191a2e9c73b64dcc9237f3674371c6a28b0d7a4c1"} Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.500542 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.512443 4832 scope.go:117] "RemoveContainer" containerID="4dee36ac56fab7c4baf0317490acd32b0d7e3dd45acd0b015a16068dfd704e62" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.522313 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.539478 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.560866 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.572640 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.587719 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: E1204 06:31:17.588663 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-log" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.588687 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-log" Dec 04 
06:31:17 crc kubenswrapper[4832]: E1204 06:31:17.588707 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d7586a-ce09-409b-ac0b-a310bf90dec0" containerName="nova-cell1-novncproxy-novncproxy" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.588718 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d7586a-ce09-409b-ac0b-a310bf90dec0" containerName="nova-cell1-novncproxy-novncproxy" Dec 04 06:31:17 crc kubenswrapper[4832]: E1204 06:31:17.588757 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-metadata" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.588769 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-metadata" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.589017 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-log" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.589039 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d7586a-ce09-409b-ac0b-a310bf90dec0" containerName="nova-cell1-novncproxy-novncproxy" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.590020 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" containerName="nova-metadata-metadata" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.591285 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.595658 4832 scope.go:117] "RemoveContainer" containerID="9ad911dad2eb20bd1b743587735a7b80b02628c76de130a635cd134053d94ad5" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.598528 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.598708 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.598826 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.614315 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.616203 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.623483 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.623685 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.628522 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.639757 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.061183793 podStartE2EDuration="2.639725504s" podCreationTimestamp="2025-12-04 06:31:15 +0000 UTC" firstStartedPulling="2025-12-04 06:31:16.322587274 +0000 UTC m=+1331.935404980" lastFinishedPulling="2025-12-04 06:31:16.901128985 +0000 UTC m=+1332.513946691" observedRunningTime="2025-12-04 06:31:17.582838133 +0000 UTC m=+1333.195655839" watchObservedRunningTime="2025-12-04 06:31:17.639725504 +0000 UTC m=+1333.252543210" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.641325 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgpjj\" (UniqueName: \"kubernetes.io/projected/fae96e75-8064-475a-9b3f-9b68932ff076-kube-api-access-qgpjj\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.641615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-config-data\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.641760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.656531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.656734 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae96e75-8064-475a-9b3f-9b68932ff076-logs\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.656912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: 
I1204 06:31:17.657058 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.657218 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.657330 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpcmb\" (UniqueName: \"kubernetes.io/projected/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-kube-api-access-mpcmb\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.657479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.692120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.759759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.759844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.759871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpcmb\" (UniqueName: \"kubernetes.io/projected/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-kube-api-access-mpcmb\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.759930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.759978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgpjj\" (UniqueName: \"kubernetes.io/projected/fae96e75-8064-475a-9b3f-9b68932ff076-kube-api-access-qgpjj\") pod \"nova-metadata-0\" (UID: 
\"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.760083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-config-data\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.760141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.760192 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.760234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae96e75-8064-475a-9b3f-9b68932ff076-logs\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.760321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.766498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.767141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-config-data\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.769718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae96e75-8064-475a-9b3f-9b68932ff076-logs\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.772225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.772722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.773676 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.773700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.778055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.784473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgpjj\" (UniqueName: \"kubernetes.io/projected/fae96e75-8064-475a-9b3f-9b68932ff076-kube-api-access-qgpjj\") pod \"nova-metadata-0\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " pod="openstack/nova-metadata-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.789619 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpcmb\" (UniqueName: \"kubernetes.io/projected/1c696d5d-0ba4-4406-aa1e-ef9d99df8136-kube-api-access-mpcmb\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c696d5d-0ba4-4406-aa1e-ef9d99df8136\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.923711 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:17 crc kubenswrapper[4832]: I1204 06:31:17.958907 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.468856 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 06:31:18 crc kubenswrapper[4832]: W1204 06:31:18.474193 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c696d5d_0ba4_4406_aa1e_ef9d99df8136.slice/crio-6458873382d321bd623061d01b79c6ce55fd406f367a745271b754e719aeab0f WatchSource:0}: Error finding container 6458873382d321bd623061d01b79c6ce55fd406f367a745271b754e719aeab0f: Status 404 returned error can't find the container with id 6458873382d321bd623061d01b79c6ce55fd406f367a745271b754e719aeab0f Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.479674 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.559768 4832 generic.go:334] "Generic (PLEG): container finished" podID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerID="f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c" exitCode=0 Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.559836 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.559891 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerDied","Data":"f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c"} Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.559991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a1db388-a224-4e6f-b54d-de7b0321a518","Type":"ContainerDied","Data":"c4a05a384519deb94a0a924bdcda3a02e2ee7e1c74aafcafc35d8c8e16696064"} Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.560026 4832 scope.go:117] "RemoveContainer" containerID="0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.565642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c696d5d-0ba4-4406-aa1e-ef9d99df8136","Type":"ContainerStarted","Data":"6458873382d321bd623061d01b79c6ce55fd406f367a745271b754e719aeab0f"} Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.569898 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-log-httpd\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-run-httpd\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-config-data\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576501 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-scripts\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576535 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-combined-ca-bundle\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576603 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-sg-core-conf-yaml\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.576655 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5gp8\" (UniqueName: \"kubernetes.io/projected/3a1db388-a224-4e6f-b54d-de7b0321a518-kube-api-access-x5gp8\") pod \"3a1db388-a224-4e6f-b54d-de7b0321a518\" (UID: \"3a1db388-a224-4e6f-b54d-de7b0321a518\") " Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.578803 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.580875 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.585613 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1db388-a224-4e6f-b54d-de7b0321a518-kube-api-access-x5gp8" (OuterVolumeSpecName: "kube-api-access-x5gp8") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "kube-api-access-x5gp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.587942 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-scripts" (OuterVolumeSpecName: "scripts") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.613004 4832 scope.go:117] "RemoveContainer" containerID="3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.621753 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.664051 4832 scope.go:117] "RemoveContainer" containerID="f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.678951 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.678985 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.679019 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5gp8\" (UniqueName: \"kubernetes.io/projected/3a1db388-a224-4e6f-b54d-de7b0321a518-kube-api-access-x5gp8\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.679036 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.679055 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a1db388-a224-4e6f-b54d-de7b0321a518-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.690093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.700976 4832 scope.go:117] "RemoveContainer" containerID="970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.724268 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d7586a-ce09-409b-ac0b-a310bf90dec0" path="/var/lib/kubelet/pods/c1d7586a-ce09-409b-ac0b-a310bf90dec0/volumes" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.724968 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c2e869-61be-409d-ab69-60b0d6e87ca6" path="/var/lib/kubelet/pods/d9c2e869-61be-409d-ab69-60b0d6e87ca6/volumes" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.729730 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-config-data" (OuterVolumeSpecName: "config-data") pod "3a1db388-a224-4e6f-b54d-de7b0321a518" (UID: "3a1db388-a224-4e6f-b54d-de7b0321a518"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.781303 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.781342 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1db388-a224-4e6f-b54d-de7b0321a518-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.861422 4832 scope.go:117] "RemoveContainer" containerID="0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.862228 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a\": container with ID starting with 0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a not found: ID does not exist" containerID="0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.862295 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a"} err="failed to get container status \"0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a\": rpc error: code = NotFound desc = could not find container \"0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a\": container with ID starting with 0a1da8a4ed2e835e6c6bdf938437b2c0bb122929445007df60886e6120fb016a not found: ID does not exist" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.862329 4832 scope.go:117] "RemoveContainer" containerID="3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.862813 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3\": container with ID starting with 3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3 not found: ID does not exist" containerID="3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.862865 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3"} err="failed to get container status \"3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3\": rpc error: code = NotFound desc = could not find container \"3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3\": container with ID starting with 3ece162fb6ed688cc2c03585f0d8c878e3b89690988448e1bed2f95909be84e3 not found: ID does not exist" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.862890 4832 scope.go:117] "RemoveContainer" containerID="f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.863723 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c\": container with ID starting with 
f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c not found: ID does not exist" containerID="f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.863748 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c"} err="failed to get container status \"f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c\": rpc error: code = NotFound desc = could not find container \"f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c\": container with ID starting with f539034883fa20047547389b34e2f63afeac8a28d15af96b99174a0b55a9f81c not found: ID does not exist" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.863765 4832 scope.go:117] "RemoveContainer" containerID="970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.864610 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e\": container with ID starting with 970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e not found: ID does not exist" containerID="970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.864660 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e"} err="failed to get container status \"970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e\": rpc error: code = NotFound desc = could not find container \"970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e\": container with ID starting with 970a42c9ff46a796354ff5b25ef8341eb76d97b30286a8f52342e1f6309e9f7e not found: ID does not exist" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.902615 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.929677 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.964596 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.965078 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="proxy-httpd" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965093 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="proxy-httpd" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.965106 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="sg-core" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965112 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="sg-core" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.965146 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-central-agent" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965153 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-central-agent" Dec 04 06:31:18 crc kubenswrapper[4832]: E1204 06:31:18.965169 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-notification-agent" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965176 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-notification-agent" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965372 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="sg-core" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965406 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-central-agent" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965415 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="ceilometer-notification-agent" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.965442 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" containerName="proxy-httpd" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.967443 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.972268 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.972569 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.972277 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.983800 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.989785 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.990906 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 04 06:31:18 crc kubenswrapper[4832]: I1204 06:31:18.992536 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.017355 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.087980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-scripts\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.088043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-run-httpd\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc 
kubenswrapper[4832]: I1204 06:31:19.088080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kb4m\" (UniqueName: \"kubernetes.io/projected/e992ecf9-158e-4ff4-8bef-4d9f4891240a-kube-api-access-2kb4m\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.088191 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-log-httpd\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.088229 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-config-data\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.088298 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.088325 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.088572 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-scripts\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-run-httpd\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kb4m\" (UniqueName: 
\"kubernetes.io/projected/e992ecf9-158e-4ff4-8bef-4d9f4891240a-kube-api-access-2kb4m\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190758 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-log-httpd\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-config-data\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190827 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.190844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.191653 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-run-httpd\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.192012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-log-httpd\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.195438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.196609 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.197243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.197759 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-config-data\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.200577 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-scripts\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.217735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kb4m\" (UniqueName: \"kubernetes.io/projected/e992ecf9-158e-4ff4-8bef-4d9f4891240a-kube-api-access-2kb4m\") pod \"ceilometer-0\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.300034 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.578368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fae96e75-8064-475a-9b3f-9b68932ff076","Type":"ContainerStarted","Data":"9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e"} Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.578785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fae96e75-8064-475a-9b3f-9b68932ff076","Type":"ContainerStarted","Data":"273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a"} Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.578799 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fae96e75-8064-475a-9b3f-9b68932ff076","Type":"ContainerStarted","Data":"28ecce834b1b0e45aa6567aced4788bc54cefeed4dcce865f24fa9d28cf3c2c9"} Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.586167 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c696d5d-0ba4-4406-aa1e-ef9d99df8136","Type":"ContainerStarted","Data":"9f2f1e55f1c8c794d6666c132cc02037396096dea5c9d63a9f58175994226b68"} Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.586382 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.593813 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.604707 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.604687452 podStartE2EDuration="2.604687452s" podCreationTimestamp="2025-12-04 06:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:19.595929834 +0000 UTC m=+1335.208747540" watchObservedRunningTime="2025-12-04 06:31:19.604687452 +0000 UTC m=+1335.217505158" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.645155 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.645119404 podStartE2EDuration="2.645119404s" podCreationTimestamp="2025-12-04 06:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-04 06:31:19.635006943 +0000 UTC m=+1335.247824649" watchObservedRunningTime="2025-12-04 06:31:19.645119404 +0000 UTC m=+1335.257937110" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.795262 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.812195 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xj58h"] Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.813913 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:19 crc kubenswrapper[4832]: I1204 06:31:19.827472 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xj58h"] Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.009444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-config\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.009502 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmt9\" (UniqueName: \"kubernetes.io/projected/59983b29-268f-440b-a57c-7d3584241778-kube-api-access-9qmt9\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.009533 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.009568 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.009632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.009655 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.111439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-config\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: 
\"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.111492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmt9\" (UniqueName: \"kubernetes.io/projected/59983b29-268f-440b-a57c-7d3584241778-kube-api-access-9qmt9\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.111521 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.111556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.111601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.111622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.112677 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.112678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.112826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.112954 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 
06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.113253 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-config\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.133874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qmt9\" (UniqueName: \"kubernetes.io/projected/59983b29-268f-440b-a57c-7d3584241778-kube-api-access-9qmt9\") pod \"dnsmasq-dns-89c5cd4d5-xj58h\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.188475 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.630330 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerStarted","Data":"7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c"} Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.630686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerStarted","Data":"1f6fea605fe2d295f68cde7ff4e1cbe82d00dba5b7b0a0d18c556c29630b5b63"} Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.724037 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1db388-a224-4e6f-b54d-de7b0321a518" path="/var/lib/kubelet/pods/3a1db388-a224-4e6f-b54d-de7b0321a518/volumes" Dec 04 06:31:20 crc kubenswrapper[4832]: I1204 06:31:20.778007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xj58h"] Dec 04 06:31:20 crc kubenswrapper[4832]: W1204 06:31:20.790528 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59983b29_268f_440b_a57c_7d3584241778.slice/crio-6675c14cd4d65ac05de6e3fe68091a859eec4eabc9d7bd8d0cd7f65ccc75659c WatchSource:0}: Error finding container 6675c14cd4d65ac05de6e3fe68091a859eec4eabc9d7bd8d0cd7f65ccc75659c: Status 404 returned error can't find the container with id 6675c14cd4d65ac05de6e3fe68091a859eec4eabc9d7bd8d0cd7f65ccc75659c Dec 04 06:31:21 crc kubenswrapper[4832]: I1204 06:31:21.641611 4832 generic.go:334] "Generic (PLEG): container finished" podID="59983b29-268f-440b-a57c-7d3584241778" containerID="225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714" exitCode=0 Dec 04 06:31:21 crc kubenswrapper[4832]: I1204 06:31:21.641739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" event={"ID":"59983b29-268f-440b-a57c-7d3584241778","Type":"ContainerDied","Data":"225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714"} Dec 04 06:31:21 crc kubenswrapper[4832]: I1204 06:31:21.642274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" event={"ID":"59983b29-268f-440b-a57c-7d3584241778","Type":"ContainerStarted","Data":"6675c14cd4d65ac05de6e3fe68091a859eec4eabc9d7bd8d0cd7f65ccc75659c"} Dec 04 06:31:21 crc kubenswrapper[4832]: I1204 06:31:21.650076 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerStarted","Data":"d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30"} Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.110006 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.662358 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" event={"ID":"59983b29-268f-440b-a57c-7d3584241778","Type":"ContainerStarted","Data":"912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463"} Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.663194 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.664930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerStarted","Data":"2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b"} Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.665089 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-log" containerID="cri-o://afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266" gracePeriod=30 Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.665238 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-api" containerID="cri-o://ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16" gracePeriod=30 Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.699634 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" podStartSLOduration=3.699610796 podStartE2EDuration="3.699610796s" podCreationTimestamp="2025-12-04 06:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:22.691061884 +0000 UTC m=+1338.303879590" watchObservedRunningTime="2025-12-04 06:31:22.699610796 +0000 UTC m=+1338.312428502" Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.924018 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.959531 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 06:31:22 crc kubenswrapper[4832]: I1204 06:31:22.959589 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 06:31:23 crc kubenswrapper[4832]: I1204 06:31:23.157885 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:23 crc kubenswrapper[4832]: I1204 06:31:23.690866 4832 generic.go:334] "Generic (PLEG): container finished" podID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerID="afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266" exitCode=143 Dec 04 06:31:23 crc kubenswrapper[4832]: I1204 06:31:23.690944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d93904fb-9acc-419d-9d7d-3e0effe0d457","Type":"ContainerDied","Data":"afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266"} Dec 04 
06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.703449 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-central-agent" containerID="cri-o://7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c" gracePeriod=30 Dec 04 06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.703993 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerStarted","Data":"83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14"} Dec 04 06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.704031 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.704288 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="proxy-httpd" containerID="cri-o://83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14" gracePeriod=30 Dec 04 06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.704331 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="sg-core" containerID="cri-o://2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b" gracePeriod=30 Dec 04 06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.704365 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-notification-agent" containerID="cri-o://d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30" gracePeriod=30 Dec 04 06:31:24 crc kubenswrapper[4832]: I1204 06:31:24.728696 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.03948941 podStartE2EDuration="6.728674024s" podCreationTimestamp="2025-12-04 06:31:18 +0000 UTC" firstStartedPulling="2025-12-04 06:31:19.826924994 +0000 UTC m=+1335.439742700" lastFinishedPulling="2025-12-04 06:31:23.516109598 +0000 UTC m=+1339.128927314" observedRunningTime="2025-12-04 06:31:24.726254023 +0000 UTC m=+1340.339071749" watchObservedRunningTime="2025-12-04 06:31:24.728674024 +0000 UTC m=+1340.341491730" Dec 04 06:31:25 crc kubenswrapper[4832]: I1204 06:31:25.714919 4832 generic.go:334] "Generic (PLEG): container finished" podID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerID="83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14" exitCode=0 Dec 04 06:31:25 crc kubenswrapper[4832]: I1204 06:31:25.715231 4832 generic.go:334] "Generic (PLEG): container finished" podID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerID="2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b" exitCode=2 Dec 04 06:31:25 crc kubenswrapper[4832]: I1204 06:31:25.715241 4832 generic.go:334] "Generic (PLEG): container finished" podID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerID="d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30" exitCode=0 Dec 04 06:31:25 crc kubenswrapper[4832]: I1204 06:31:25.714972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerDied","Data":"83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14"} Dec 04 06:31:25 crc 
kubenswrapper[4832]: I1204 06:31:25.715275 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerDied","Data":"2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b"} Dec 04 06:31:25 crc kubenswrapper[4832]: I1204 06:31:25.715288 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerDied","Data":"d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30"} Dec 04 06:31:25 crc kubenswrapper[4832]: I1204 06:31:25.823755 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.331356 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.452702 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crsbm\" (UniqueName: \"kubernetes.io/projected/d93904fb-9acc-419d-9d7d-3e0effe0d457-kube-api-access-crsbm\") pod \"d93904fb-9acc-419d-9d7d-3e0effe0d457\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.453184 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-combined-ca-bundle\") pod \"d93904fb-9acc-419d-9d7d-3e0effe0d457\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.453328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d93904fb-9acc-419d-9d7d-3e0effe0d457-logs\") pod \"d93904fb-9acc-419d-9d7d-3e0effe0d457\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.453381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-config-data\") pod \"d93904fb-9acc-419d-9d7d-3e0effe0d457\" (UID: \"d93904fb-9acc-419d-9d7d-3e0effe0d457\") " Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.453819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93904fb-9acc-419d-9d7d-3e0effe0d457-logs" (OuterVolumeSpecName: "logs") pod "d93904fb-9acc-419d-9d7d-3e0effe0d457" (UID: "d93904fb-9acc-419d-9d7d-3e0effe0d457"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.465978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93904fb-9acc-419d-9d7d-3e0effe0d457-kube-api-access-crsbm" (OuterVolumeSpecName: "kube-api-access-crsbm") pod "d93904fb-9acc-419d-9d7d-3e0effe0d457" (UID: "d93904fb-9acc-419d-9d7d-3e0effe0d457"). InnerVolumeSpecName "kube-api-access-crsbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.487794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-config-data" (OuterVolumeSpecName: "config-data") pod "d93904fb-9acc-419d-9d7d-3e0effe0d457" (UID: "d93904fb-9acc-419d-9d7d-3e0effe0d457"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.490976 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d93904fb-9acc-419d-9d7d-3e0effe0d457" (UID: "d93904fb-9acc-419d-9d7d-3e0effe0d457"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.555254 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d93904fb-9acc-419d-9d7d-3e0effe0d457-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.555290 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.555311 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crsbm\" (UniqueName: \"kubernetes.io/projected/d93904fb-9acc-419d-9d7d-3e0effe0d457-kube-api-access-crsbm\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.555327 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93904fb-9acc-419d-9d7d-3e0effe0d457-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.729172 4832 generic.go:334] "Generic (PLEG): container finished" podID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerID="ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16" exitCode=0 Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.729225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d93904fb-9acc-419d-9d7d-3e0effe0d457","Type":"ContainerDied","Data":"ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16"} Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.729289 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d93904fb-9acc-419d-9d7d-3e0effe0d457","Type":"ContainerDied","Data":"8a087a380d1850d1c985c9d0dac023aba15d12ee78366307f6dc97daec21ef4c"} Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.729331 4832 scope.go:117] "RemoveContainer" containerID="ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.729764 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.768954 4832 scope.go:117] "RemoveContainer" containerID="afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.786531 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.794643 4832 scope.go:117] "RemoveContainer" containerID="ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16" Dec 04 06:31:26 crc kubenswrapper[4832]: E1204 06:31:26.798770 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16\": container with ID starting with ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16 not found: ID does not exist" containerID="ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.798817 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16"} err="failed to get container status \"ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16\": rpc error: code = NotFound desc = could not find container \"ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16\": container with ID starting with ccafc82fa4ba73e45cd8565c293ec592b8ce4f13361130d5500894effe756d16 not found: ID does not exist" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.798846 4832 scope.go:117] "RemoveContainer" containerID="afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266" Dec 04 06:31:26 crc kubenswrapper[4832]: E1204 06:31:26.799495 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266\": container with ID starting with afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266 not found: ID does not exist" containerID="afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.799518 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266"} err="failed to get container status \"afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266\": rpc error: code = NotFound desc = could not find container \"afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266\": container with ID starting with afd99c92569809ac5035512587339cbf81b65842516be496afd889d0a6c6a266 not found: ID does not exist" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.802883 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.818347 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:26 crc kubenswrapper[4832]: E1204 06:31:26.818712 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-log" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.818728 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-log" Dec 04 06:31:26 crc 
kubenswrapper[4832]: E1204 06:31:26.818742 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-api" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.818748 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-api" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.818950 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-api" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.818966 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" containerName="nova-api-log" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.819958 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.823592 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.823925 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.824364 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.849306 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.965773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.966256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-config-data\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.966295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.966378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-public-tls-certs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.966428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzrz\" (UniqueName: \"kubernetes.io/projected/db157e88-5a3d-42de-9085-2a52cd33211a-kube-api-access-nqzrz\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:26 crc kubenswrapper[4832]: I1204 06:31:26.966456 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db157e88-5a3d-42de-9085-2a52cd33211a-logs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.069246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.069351 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-config-data\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.069406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.069641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-public-tls-certs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.069671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqzrz\" (UniqueName: \"kubernetes.io/projected/db157e88-5a3d-42de-9085-2a52cd33211a-kube-api-access-nqzrz\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.069700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db157e88-5a3d-42de-9085-2a52cd33211a-logs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.070232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db157e88-5a3d-42de-9085-2a52cd33211a-logs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.075410 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.075909 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.076071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-config-data\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.078813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-public-tls-certs\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.089878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqzrz\" (UniqueName: \"kubernetes.io/projected/db157e88-5a3d-42de-9085-2a52cd33211a-kube-api-access-nqzrz\") pod \"nova-api-0\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.142047 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:27 crc kubenswrapper[4832]: W1204 06:31:27.626113 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb157e88_5a3d_42de_9085_2a52cd33211a.slice/crio-ec20f0f3246d62dc4ebdb512d6282dd89a3ab83d3376631442cbe8fed6c927c0 WatchSource:0}: Error finding container ec20f0f3246d62dc4ebdb512d6282dd89a3ab83d3376631442cbe8fed6c927c0: Status 404 returned error can't find the container with id ec20f0f3246d62dc4ebdb512d6282dd89a3ab83d3376631442cbe8fed6c927c0 Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.629458 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.700030 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.766112 4832 generic.go:334] "Generic (PLEG): container finished" podID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerID="7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c" exitCode=0 Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.767660 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerDied","Data":"7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c"} Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.767722 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e992ecf9-158e-4ff4-8bef-4d9f4891240a","Type":"ContainerDied","Data":"1f6fea605fe2d295f68cde7ff4e1cbe82d00dba5b7b0a0d18c556c29630b5b63"} Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.767772 4832 scope.go:117] "RemoveContainer" containerID="83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.768616 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.774169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db157e88-5a3d-42de-9085-2a52cd33211a","Type":"ContainerStarted","Data":"ec20f0f3246d62dc4ebdb512d6282dd89a3ab83d3376631442cbe8fed6c927c0"} Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.791090 4832 scope.go:117] "RemoveContainer" containerID="2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.814422 4832 scope.go:117] "RemoveContainer" containerID="d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.835958 4832 scope.go:117] "RemoveContainer" containerID="7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.857194 4832 scope.go:117] "RemoveContainer" containerID="83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14" Dec 04 06:31:27 crc kubenswrapper[4832]: E1204 06:31:27.857740 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14\": container with ID starting with 83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14 not found: ID does not exist" containerID="83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.857780 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14"} err="failed to get container status \"83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14\": rpc error: code = NotFound desc = could not find container \"83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14\": container with ID starting with 83e457edba1190ae83083b6ddc193358f37d8f4905905c4a62df66ee59f77a14 not found: ID does not exist" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.857830 4832 scope.go:117] "RemoveContainer" containerID="2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b" Dec 04 06:31:27 crc kubenswrapper[4832]: E1204 06:31:27.858471 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b\": container with ID starting with 2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b not found: ID does not exist" containerID="2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.858498 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b"} err="failed to get container status \"2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b\": rpc error: code = NotFound desc = could not find container \"2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b\": container with ID starting with 2e04506c0ae3046279f0d2e7212da7bac77945c1ed7dc591f83f29a4f872958b not found: ID does not exist" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.858516 4832 scope.go:117] "RemoveContainer" containerID="d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30" Dec 04 
06:31:27 crc kubenswrapper[4832]: E1204 06:31:27.858820 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30\": container with ID starting with d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30 not found: ID does not exist" containerID="d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.858853 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30"} err="failed to get container status \"d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30\": rpc error: code = NotFound desc = could not find container \"d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30\": container with ID starting with d8c9746edd42adc02ba144525314e327d5432e1c5f665cb4af427358ddcdda30 not found: ID does not exist" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.858892 4832 scope.go:117] "RemoveContainer" containerID="7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c" Dec 04 06:31:27 crc kubenswrapper[4832]: E1204 06:31:27.859275 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c\": container with ID starting with 7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c not found: ID does not exist" containerID="7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.859303 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c"} err="failed to get container status \"7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c\": rpc error: code = NotFound desc = could not find container \"7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c\": container with ID starting with 7124fdf3e1923f5b78b04cb4dde3c5600f9dae7c2572e25cb42c24aab9288f0c not found: ID does not exist" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.888666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-ceilometer-tls-certs\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.888757 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kb4m\" (UniqueName: \"kubernetes.io/projected/e992ecf9-158e-4ff4-8bef-4d9f4891240a-kube-api-access-2kb4m\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.888836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-combined-ca-bundle\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.888884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-config-data\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.888953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-sg-core-conf-yaml\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.889066 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-log-httpd\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.889108 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-run-httpd\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.889156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-scripts\") pod \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\" (UID: \"e992ecf9-158e-4ff4-8bef-4d9f4891240a\") " Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.889783 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.890080 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.890710 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.890732 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e992ecf9-158e-4ff4-8bef-4d9f4891240a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.893536 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e992ecf9-158e-4ff4-8bef-4d9f4891240a-kube-api-access-2kb4m" (OuterVolumeSpecName: "kube-api-access-2kb4m") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "kube-api-access-2kb4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.894015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-scripts" (OuterVolumeSpecName: "scripts") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.923935 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.927980 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.941514 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.955277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.962106 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.962654 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.993464 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.993981 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kb4m\" (UniqueName: \"kubernetes.io/projected/e992ecf9-158e-4ff4-8bef-4d9f4891240a-kube-api-access-2kb4m\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.994008 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.994017 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:27 crc kubenswrapper[4832]: I1204 06:31:27.994051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.009379 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-config-data" (OuterVolumeSpecName: "config-data") pod "e992ecf9-158e-4ff4-8bef-4d9f4891240a" (UID: "e992ecf9-158e-4ff4-8bef-4d9f4891240a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.096408 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.096453 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e992ecf9-158e-4ff4-8bef-4d9f4891240a-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.196578 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.206929 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.221651 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:28 crc kubenswrapper[4832]: E1204 06:31:28.222073 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="sg-core" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222091 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="sg-core" Dec 04 06:31:28 crc kubenswrapper[4832]: E1204 06:31:28.222108 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-central-agent" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222114 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-central-agent" Dec 04 06:31:28 crc kubenswrapper[4832]: E1204 06:31:28.222127 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-notification-agent" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222132 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-notification-agent" Dec 04 06:31:28 crc kubenswrapper[4832]: E1204 06:31:28.222148 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="proxy-httpd" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222154 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="proxy-httpd" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222326 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-notification-agent" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222350 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="ceilometer-central-agent" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222366 
4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="sg-core" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.222374 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" containerName="proxy-httpd" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.224096 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.230702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.230744 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.230900 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.258080 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.299908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300005 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6h4l\" (UniqueName: \"kubernetes.io/projected/2201e018-55df-4295-b234-ae553e00f058-kube-api-access-r6h4l\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-config-data\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300219 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2201e018-55df-4295-b234-ae553e00f058-log-httpd\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300331 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300433 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-scripts\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.300754 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2201e018-55df-4295-b234-ae553e00f058-run-httpd\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.401809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.402199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6h4l\" (UniqueName: \"kubernetes.io/projected/2201e018-55df-4295-b234-ae553e00f058-kube-api-access-r6h4l\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.402227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-config-data\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.402278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.402320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.402340 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2201e018-55df-4295-b234-ae553e00f058-log-httpd\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.402379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-scripts\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.403219 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2201e018-55df-4295-b234-ae553e00f058-run-httpd\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.403705 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2201e018-55df-4295-b234-ae553e00f058-log-httpd\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.403810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2201e018-55df-4295-b234-ae553e00f058-run-httpd\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.408413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.410915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-config-data\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.411342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.423420 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-scripts\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.423427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2201e018-55df-4295-b234-ae553e00f058-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.435163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6h4l\" (UniqueName: \"kubernetes.io/projected/2201e018-55df-4295-b234-ae553e00f058-kube-api-access-r6h4l\") pod \"ceilometer-0\" (UID: \"2201e018-55df-4295-b234-ae553e00f058\") " pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.549938 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.723404 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93904fb-9acc-419d-9d7d-3e0effe0d457" path="/var/lib/kubelet/pods/d93904fb-9acc-419d-9d7d-3e0effe0d457/volumes" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.724366 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e992ecf9-158e-4ff4-8bef-4d9f4891240a" path="/var/lib/kubelet/pods/e992ecf9-158e-4ff4-8bef-4d9f4891240a/volumes" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.789763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db157e88-5a3d-42de-9085-2a52cd33211a","Type":"ContainerStarted","Data":"1c4d9aec12aecf1a7e50ff3f7a8f857d123107b4a5912daa61fc148d647a7f17"} Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.789814 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db157e88-5a3d-42de-9085-2a52cd33211a","Type":"ContainerStarted","Data":"050da5c27fa1bceee84b02fbc31ea84f60c0f7b0f9f5c44a19c650932e5a4356"} Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.814145 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.814076605 podStartE2EDuration="2.814076605s" podCreationTimestamp="2025-12-04 06:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:28.811895452 +0000 UTC m=+1344.424713148" watchObservedRunningTime="2025-12-04 06:31:28.814076605 +0000 UTC m=+1344.426894321" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.819165 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.977626 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:28 crc kubenswrapper[4832]: I1204 06:31:28.977666 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.055932 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.068892 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xbxqr"] Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.070210 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.076632 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.076897 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.117481 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xbxqr"] Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.121361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-scripts\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.121687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4f2h\" (UniqueName: \"kubernetes.io/projected/b22a38d3-115d-4317-844d-65b82c8dea97-kube-api-access-t4f2h\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.121841 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.121923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-config-data\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.223689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4f2h\" (UniqueName: \"kubernetes.io/projected/b22a38d3-115d-4317-844d-65b82c8dea97-kube-api-access-t4f2h\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.223775 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.223823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-config-data\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.223878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-scripts\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.230032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.233570 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-config-data\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.241648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-scripts\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.255316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4f2h\" (UniqueName: \"kubernetes.io/projected/b22a38d3-115d-4317-844d-65b82c8dea97-kube-api-access-t4f2h\") pod \"nova-cell1-cell-mapping-xbxqr\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.465925 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.799117 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xbxqr"] Dec 04 06:31:29 crc kubenswrapper[4832]: I1204 06:31:29.803566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2201e018-55df-4295-b234-ae553e00f058","Type":"ContainerStarted","Data":"b6e4afedc627a4b88848312b197e1235fd29b80caf1d3a4c88faa237fc420202"} Dec 04 06:31:29 crc kubenswrapper[4832]: W1204 06:31:29.812443 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb22a38d3_115d_4317_844d_65b82c8dea97.slice/crio-d6cde10830eaee47f8ba2e07100d12459b6be77a8b4b15646c1502b652dbab13 WatchSource:0}: Error finding container d6cde10830eaee47f8ba2e07100d12459b6be77a8b4b15646c1502b652dbab13: Status 404 returned error can't find the container with id d6cde10830eaee47f8ba2e07100d12459b6be77a8b4b15646c1502b652dbab13 Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.190485 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.277846 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c9ptl"] Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.278505 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="dnsmasq-dns" containerID="cri-o://f88ab60c0d3357807f4ca3054dc6bd5c1e2a542cb22a29df519a0b2e1c052acb" gracePeriod=10 Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.816545 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xbxqr" event={"ID":"b22a38d3-115d-4317-844d-65b82c8dea97","Type":"ContainerStarted","Data":"660f41e3e3b57c5933b93313c83781f543055c17d03cbc21f06f8d67de1e6553"} Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.816950 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xbxqr" event={"ID":"b22a38d3-115d-4317-844d-65b82c8dea97","Type":"ContainerStarted","Data":"d6cde10830eaee47f8ba2e07100d12459b6be77a8b4b15646c1502b652dbab13"} Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.827704 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3ab00a2-637c-483b-a649-a7b692b54668" containerID="f88ab60c0d3357807f4ca3054dc6bd5c1e2a542cb22a29df519a0b2e1c052acb" exitCode=0 Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.827813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" event={"ID":"a3ab00a2-637c-483b-a649-a7b692b54668","Type":"ContainerDied","Data":"f88ab60c0d3357807f4ca3054dc6bd5c1e2a542cb22a29df519a0b2e1c052acb"} Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.827843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" event={"ID":"a3ab00a2-637c-483b-a649-a7b692b54668","Type":"ContainerDied","Data":"5362db84cef4bd024f2ad79e075a7a43af5694dcc38fadb33de6d65e0206e6da"} Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.827856 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5362db84cef4bd024f2ad79e075a7a43af5694dcc38fadb33de6d65e0206e6da" Dec 04 06:31:30 crc kubenswrapper[4832]: 
I1204 06:31:30.829835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2201e018-55df-4295-b234-ae553e00f058","Type":"ContainerStarted","Data":"b9aaa0fab019f730f4dd9601572ada38b2d22b4efed772de8df87e28f494c5f5"} Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.829854 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2201e018-55df-4295-b234-ae553e00f058","Type":"ContainerStarted","Data":"7f751b7f6615e1468ea97051fe4bf199037df5e6e03407d09192d0d1596a5f1a"} Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.845187 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xbxqr" podStartSLOduration=1.845166064 podStartE2EDuration="1.845166064s" podCreationTimestamp="2025-12-04 06:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:30.832884889 +0000 UTC m=+1346.445702595" watchObservedRunningTime="2025-12-04 06:31:30.845166064 +0000 UTC m=+1346.457983770" Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.861342 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.962404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-config\") pod \"a3ab00a2-637c-483b-a649-a7b692b54668\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.962537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-svc\") pod \"a3ab00a2-637c-483b-a649-a7b692b54668\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.962624 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-sb\") pod \"a3ab00a2-637c-483b-a649-a7b692b54668\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.962739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgf6c\" (UniqueName: \"kubernetes.io/projected/a3ab00a2-637c-483b-a649-a7b692b54668-kube-api-access-xgf6c\") pod \"a3ab00a2-637c-483b-a649-a7b692b54668\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.962773 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-swift-storage-0\") pod \"a3ab00a2-637c-483b-a649-a7b692b54668\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.962837 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-nb\") pod \"a3ab00a2-637c-483b-a649-a7b692b54668\" (UID: \"a3ab00a2-637c-483b-a649-a7b692b54668\") " Dec 04 06:31:30 crc kubenswrapper[4832]: I1204 06:31:30.974613 4832 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/a3ab00a2-637c-483b-a649-a7b692b54668-kube-api-access-xgf6c" (OuterVolumeSpecName: "kube-api-access-xgf6c") pod "a3ab00a2-637c-483b-a649-a7b692b54668" (UID: "a3ab00a2-637c-483b-a649-a7b692b54668"). InnerVolumeSpecName "kube-api-access-xgf6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.039546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-config" (OuterVolumeSpecName: "config") pod "a3ab00a2-637c-483b-a649-a7b692b54668" (UID: "a3ab00a2-637c-483b-a649-a7b692b54668"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.050354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3ab00a2-637c-483b-a649-a7b692b54668" (UID: "a3ab00a2-637c-483b-a649-a7b692b54668"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.058327 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3ab00a2-637c-483b-a649-a7b692b54668" (UID: "a3ab00a2-637c-483b-a649-a7b692b54668"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.065325 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.065358 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.065370 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgf6c\" (UniqueName: \"kubernetes.io/projected/a3ab00a2-637c-483b-a649-a7b692b54668-kube-api-access-xgf6c\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.065380 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.065571 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3ab00a2-637c-483b-a649-a7b692b54668" (UID: "a3ab00a2-637c-483b-a649-a7b692b54668"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.084802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3ab00a2-637c-483b-a649-a7b692b54668" (UID: "a3ab00a2-637c-483b-a649-a7b692b54668"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.168293 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.168795 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3ab00a2-637c-483b-a649-a7b692b54668-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.842036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2201e018-55df-4295-b234-ae553e00f058","Type":"ContainerStarted","Data":"c1fca5642bacd522a6dd0ad5db6241e71acb546b4fa024e8c9333a2d8cf58b19"} Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.842189 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.883020 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c9ptl"] Dec 04 06:31:31 crc kubenswrapper[4832]: I1204 06:31:31.892695 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-c9ptl"] Dec 04 06:31:32 crc kubenswrapper[4832]: I1204 06:31:32.724031 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" path="/var/lib/kubelet/pods/a3ab00a2-637c-483b-a649-a7b692b54668/volumes" Dec 04 06:31:32 crc kubenswrapper[4832]: I1204 06:31:32.858826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2201e018-55df-4295-b234-ae553e00f058","Type":"ContainerStarted","Data":"c6b4187682c663cf28f4e4e563a56a9bcb8edc3dc4e7376a06b89320087be79f"} Dec 04 06:31:32 crc kubenswrapper[4832]: I1204 06:31:32.860550 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 06:31:32 crc kubenswrapper[4832]: I1204 06:31:32.885117 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.448202534 podStartE2EDuration="4.88507933s" podCreationTimestamp="2025-12-04 06:31:28 +0000 UTC" firstStartedPulling="2025-12-04 06:31:29.075744856 +0000 UTC m=+1344.688562562" lastFinishedPulling="2025-12-04 06:31:32.512621642 +0000 UTC m=+1348.125439358" observedRunningTime="2025-12-04 06:31:32.881171123 +0000 UTC m=+1348.493988839" watchObservedRunningTime="2025-12-04 06:31:32.88507933 +0000 UTC m=+1348.497897046" Dec 04 06:31:35 crc kubenswrapper[4832]: I1204 06:31:35.762075 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-c9ptl" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: i/o timeout" Dec 04 06:31:35 crc kubenswrapper[4832]: I1204 06:31:35.893598 4832 generic.go:334] "Generic (PLEG): container finished" podID="b22a38d3-115d-4317-844d-65b82c8dea97" containerID="660f41e3e3b57c5933b93313c83781f543055c17d03cbc21f06f8d67de1e6553" exitCode=0 Dec 04 06:31:35 crc kubenswrapper[4832]: I1204 06:31:35.893715 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xbxqr" 
event={"ID":"b22a38d3-115d-4317-844d-65b82c8dea97","Type":"ContainerDied","Data":"660f41e3e3b57c5933b93313c83781f543055c17d03cbc21f06f8d67de1e6553"} Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.142793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.143175 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.282635 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.461045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-scripts\") pod \"b22a38d3-115d-4317-844d-65b82c8dea97\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.461167 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4f2h\" (UniqueName: \"kubernetes.io/projected/b22a38d3-115d-4317-844d-65b82c8dea97-kube-api-access-t4f2h\") pod \"b22a38d3-115d-4317-844d-65b82c8dea97\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.461350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-config-data\") pod \"b22a38d3-115d-4317-844d-65b82c8dea97\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.461571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-combined-ca-bundle\") pod \"b22a38d3-115d-4317-844d-65b82c8dea97\" (UID: \"b22a38d3-115d-4317-844d-65b82c8dea97\") " Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.468256 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22a38d3-115d-4317-844d-65b82c8dea97-kube-api-access-t4f2h" (OuterVolumeSpecName: "kube-api-access-t4f2h") pod "b22a38d3-115d-4317-844d-65b82c8dea97" (UID: "b22a38d3-115d-4317-844d-65b82c8dea97"). InnerVolumeSpecName "kube-api-access-t4f2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.471260 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-scripts" (OuterVolumeSpecName: "scripts") pod "b22a38d3-115d-4317-844d-65b82c8dea97" (UID: "b22a38d3-115d-4317-844d-65b82c8dea97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.494927 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-config-data" (OuterVolumeSpecName: "config-data") pod "b22a38d3-115d-4317-844d-65b82c8dea97" (UID: "b22a38d3-115d-4317-844d-65b82c8dea97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.499350 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b22a38d3-115d-4317-844d-65b82c8dea97" (UID: "b22a38d3-115d-4317-844d-65b82c8dea97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.563819 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.563869 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.563887 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b22a38d3-115d-4317-844d-65b82c8dea97-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.563901 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4f2h\" (UniqueName: \"kubernetes.io/projected/b22a38d3-115d-4317-844d-65b82c8dea97-kube-api-access-t4f2h\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.939700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xbxqr" event={"ID":"b22a38d3-115d-4317-844d-65b82c8dea97","Type":"ContainerDied","Data":"d6cde10830eaee47f8ba2e07100d12459b6be77a8b4b15646c1502b652dbab13"} Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.939766 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6cde10830eaee47f8ba2e07100d12459b6be77a8b4b15646c1502b652dbab13" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.939885 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xbxqr" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.969843 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.970059 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.976374 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 06:31:37 crc kubenswrapper[4832]: I1204 06:31:37.978264 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.156663 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.156901 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.188231 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.188527 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-log" containerID="cri-o://050da5c27fa1bceee84b02fbc31ea84f60c0f7b0f9f5c44a19c650932e5a4356" gracePeriod=30 Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.188683 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-api" containerID="cri-o://1c4d9aec12aecf1a7e50ff3f7a8f857d123107b4a5912daa61fc148d647a7f17" gracePeriod=30 Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.211213 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.211614 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" containerName="nova-scheduler-scheduler" containerID="cri-o://a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" gracePeriod=30 Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.230333 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:38 crc kubenswrapper[4832]: E1204 06:31:38.771649 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 04 06:31:38 crc kubenswrapper[4832]: E1204 06:31:38.773949 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 04 06:31:38 crc kubenswrapper[4832]: E1204 06:31:38.778644 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 04 06:31:38 crc kubenswrapper[4832]: E1204 06:31:38.778709 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" containerName="nova-scheduler-scheduler" Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.953637 4832 generic.go:334] "Generic (PLEG): container finished" podID="db157e88-5a3d-42de-9085-2a52cd33211a" containerID="050da5c27fa1bceee84b02fbc31ea84f60c0f7b0f9f5c44a19c650932e5a4356" exitCode=143 Dec 04 06:31:38 crc kubenswrapper[4832]: I1204 06:31:38.954553 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db157e88-5a3d-42de-9085-2a52cd33211a","Type":"ContainerDied","Data":"050da5c27fa1bceee84b02fbc31ea84f60c0f7b0f9f5c44a19c650932e5a4356"} Dec 04 06:31:39 crc kubenswrapper[4832]: I1204 06:31:39.963026 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-log" containerID="cri-o://273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a" gracePeriod=30 Dec 04 06:31:39 crc kubenswrapper[4832]: I1204 06:31:39.963200 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-metadata" containerID="cri-o://9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e" gracePeriod=30 Dec 04 06:31:40 crc kubenswrapper[4832]: I1204 06:31:40.976749 4832 generic.go:334] "Generic (PLEG): container finished" podID="fae96e75-8064-475a-9b3f-9b68932ff076" containerID="273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a" exitCode=143 Dec 04 06:31:40 crc kubenswrapper[4832]: I1204 06:31:40.976856 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fae96e75-8064-475a-9b3f-9b68932ff076","Type":"ContainerDied","Data":"273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a"} Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.126436 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:59898->10.217.0.196:8775: read: connection reset by peer" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.126691 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:59884->10.217.0.196:8775: read: connection reset by peer" Dec 04 
06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.700058 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.704979 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.802758 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5brrm\" (UniqueName: \"kubernetes.io/projected/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-kube-api-access-5brrm\") pod \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.802860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-combined-ca-bundle\") pod \"fae96e75-8064-475a-9b3f-9b68932ff076\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.802921 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgpjj\" (UniqueName: \"kubernetes.io/projected/fae96e75-8064-475a-9b3f-9b68932ff076-kube-api-access-qgpjj\") pod \"fae96e75-8064-475a-9b3f-9b68932ff076\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.803466 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-nova-metadata-tls-certs\") pod \"fae96e75-8064-475a-9b3f-9b68932ff076\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.803518 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-config-data\") pod \"fae96e75-8064-475a-9b3f-9b68932ff076\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.803580 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-combined-ca-bundle\") pod \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.803633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-config-data\") pod \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\" (UID: \"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.812283 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-kube-api-access-5brrm" (OuterVolumeSpecName: "kube-api-access-5brrm") pod "ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" (UID: "ffea2af5-d5b2-4ac9-ba03-82606dd1cccf"). InnerVolumeSpecName "kube-api-access-5brrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.814690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae96e75-8064-475a-9b3f-9b68932ff076-kube-api-access-qgpjj" (OuterVolumeSpecName: "kube-api-access-qgpjj") pod "fae96e75-8064-475a-9b3f-9b68932ff076" (UID: "fae96e75-8064-475a-9b3f-9b68932ff076"). InnerVolumeSpecName "kube-api-access-qgpjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.852622 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" (UID: "ffea2af5-d5b2-4ac9-ba03-82606dd1cccf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.870043 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-config-data" (OuterVolumeSpecName: "config-data") pod "fae96e75-8064-475a-9b3f-9b68932ff076" (UID: "fae96e75-8064-475a-9b3f-9b68932ff076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.870153 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fae96e75-8064-475a-9b3f-9b68932ff076" (UID: "fae96e75-8064-475a-9b3f-9b68932ff076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.875312 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fae96e75-8064-475a-9b3f-9b68932ff076" (UID: "fae96e75-8064-475a-9b3f-9b68932ff076"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.887667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-config-data" (OuterVolumeSpecName: "config-data") pod "ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" (UID: "ffea2af5-d5b2-4ac9-ba03-82606dd1cccf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.905483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae96e75-8064-475a-9b3f-9b68932ff076-logs\") pod \"fae96e75-8064-475a-9b3f-9b68932ff076\" (UID: \"fae96e75-8064-475a-9b3f-9b68932ff076\") " Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906024 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae96e75-8064-475a-9b3f-9b68932ff076-logs" (OuterVolumeSpecName: "logs") pod "fae96e75-8064-475a-9b3f-9b68932ff076" (UID: "fae96e75-8064-475a-9b3f-9b68932ff076"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906701 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906723 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906738 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906749 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906760 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5brrm\" (UniqueName: \"kubernetes.io/projected/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf-kube-api-access-5brrm\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906769 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fae96e75-8064-475a-9b3f-9b68932ff076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906778 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgpjj\" (UniqueName: \"kubernetes.io/projected/fae96e75-8064-475a-9b3f-9b68932ff076-kube-api-access-qgpjj\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:43 crc kubenswrapper[4832]: I1204 06:31:43.906805 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fae96e75-8064-475a-9b3f-9b68932ff076-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.030681 4832 generic.go:334] "Generic (PLEG): container finished" podID="fae96e75-8064-475a-9b3f-9b68932ff076" containerID="9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e" exitCode=0 Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.030869 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.031601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fae96e75-8064-475a-9b3f-9b68932ff076","Type":"ContainerDied","Data":"9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e"} Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.031697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fae96e75-8064-475a-9b3f-9b68932ff076","Type":"ContainerDied","Data":"28ecce834b1b0e45aa6567aced4788bc54cefeed4dcce865f24fa9d28cf3c2c9"} Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.031728 4832 scope.go:117] "RemoveContainer" containerID="9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.036232 4832 generic.go:334] "Generic (PLEG): container finished" podID="db157e88-5a3d-42de-9085-2a52cd33211a" containerID="1c4d9aec12aecf1a7e50ff3f7a8f857d123107b4a5912daa61fc148d647a7f17" exitCode=0 Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.036375 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db157e88-5a3d-42de-9085-2a52cd33211a","Type":"ContainerDied","Data":"1c4d9aec12aecf1a7e50ff3f7a8f857d123107b4a5912daa61fc148d647a7f17"} Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.038543 4832 generic.go:334] "Generic (PLEG): container finished" podID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" exitCode=0 Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.038592 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf","Type":"ContainerDied","Data":"a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909"} Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.038619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ffea2af5-d5b2-4ac9-ba03-82606dd1cccf","Type":"ContainerDied","Data":"544e9e6ebf50657cfcab76a468dabebe213f370180138f3846f4534abbb8609b"} Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.038688 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.074745 4832 scope.go:117] "RemoveContainer" containerID="273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.089473 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.099260 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.109681 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.123209 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.123942 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="init" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.123964 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="init" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.123987 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" containerName="nova-scheduler-scheduler" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.123995 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" containerName="nova-scheduler-scheduler" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.124014 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-metadata" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124022 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-metadata" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.124042 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="dnsmasq-dns" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124048 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="dnsmasq-dns" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.124066 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22a38d3-115d-4317-844d-65b82c8dea97" containerName="nova-manage" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124076 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22a38d3-115d-4317-844d-65b82c8dea97" containerName="nova-manage" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.124088 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-log" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124095 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-log" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124290 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22a38d3-115d-4317-844d-65b82c8dea97" containerName="nova-manage" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124313 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" containerName="nova-scheduler-scheduler" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124325 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-metadata" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124336 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ab00a2-637c-483b-a649-a7b692b54668" containerName="dnsmasq-dns" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.124346 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" containerName="nova-metadata-log" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.125643 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.131661 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.138660 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.138787 4832 scope.go:117] "RemoveContainer" containerID="9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.139696 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e\": container with ID starting with 9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e not found: ID does not exist" containerID="9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.139795 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e"} err="failed to get container status \"9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e\": rpc error: code = NotFound desc = could not find container \"9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e\": container with ID starting with 9f139c6523240241f5f8a0780b71d95775aa599f030d2b6f9e7be85bc508c08e not found: ID does not exist" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.139842 4832 scope.go:117] "RemoveContainer" containerID="273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.143875 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a\": container with ID starting with 273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a not found: ID does not exist" containerID="273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.143937 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a"} err="failed to get container status \"273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a\": rpc error: code = NotFound desc = could not find container 
\"273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a\": container with ID starting with 273e529149cb37cd7f112122e1c2884f1872c0f6ce54b4c3226594ca5304a84a not found: ID does not exist" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.143986 4832 scope.go:117] "RemoveContainer" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.151338 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.179230 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.190525 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.192452 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.195923 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.204885 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.304523 4832 scope.go:117] "RemoveContainer" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" Dec 04 06:31:44 crc kubenswrapper[4832]: E1204 06:31:44.306836 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909\": container with ID starting with a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909 not found: ID does not exist" containerID="a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.306889 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909"} err="failed to get container status \"a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909\": rpc error: code = NotFound desc = could not find container \"a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909\": container with ID starting with a71ae4f990c25e0628ac5fad14cf599baaeb80a75073e4dbe57e9b66ba620909 not found: ID does not exist" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.308148 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.315844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e159726-cef9-46df-b183-6b0b2f5b013e-config-data\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.315892 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-config-data\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.315920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxtw8\" (UniqueName: \"kubernetes.io/projected/4e159726-cef9-46df-b183-6b0b2f5b013e-kube-api-access-nxtw8\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.316297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-logs\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.316386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.316570 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.316738 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxtz\" (UniqueName: \"kubernetes.io/projected/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-kube-api-access-mrxtz\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.317544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159726-cef9-46df-b183-6b0b2f5b013e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.418776 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db157e88-5a3d-42de-9085-2a52cd33211a-logs\") pod \"db157e88-5a3d-42de-9085-2a52cd33211a\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.418846 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-public-tls-certs\") pod \"db157e88-5a3d-42de-9085-2a52cd33211a\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.418910 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-config-data\") pod \"db157e88-5a3d-42de-9085-2a52cd33211a\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419087 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqzrz\" (UniqueName: \"kubernetes.io/projected/db157e88-5a3d-42de-9085-2a52cd33211a-kube-api-access-nqzrz\") pod \"db157e88-5a3d-42de-9085-2a52cd33211a\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-internal-tls-certs\") pod \"db157e88-5a3d-42de-9085-2a52cd33211a\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-combined-ca-bundle\") pod \"db157e88-5a3d-42de-9085-2a52cd33211a\" (UID: \"db157e88-5a3d-42de-9085-2a52cd33211a\") " Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419547 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db157e88-5a3d-42de-9085-2a52cd33211a-logs" (OuterVolumeSpecName: "logs") pod "db157e88-5a3d-42de-9085-2a52cd33211a" (UID: "db157e88-5a3d-42de-9085-2a52cd33211a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e159726-cef9-46df-b183-6b0b2f5b013e-config-data\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-config-data\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419733 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxtw8\" (UniqueName: \"kubernetes.io/projected/4e159726-cef9-46df-b183-6b0b2f5b013e-kube-api-access-nxtw8\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-logs\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.419995 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.420076 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxtz\" (UniqueName: \"kubernetes.io/projected/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-kube-api-access-mrxtz\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.420153 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159726-cef9-46df-b183-6b0b2f5b013e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.420246 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db157e88-5a3d-42de-9085-2a52cd33211a-logs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.420522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-logs\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " 
pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.425714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.426427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-config-data\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.427791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.428789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db157e88-5a3d-42de-9085-2a52cd33211a-kube-api-access-nqzrz" (OuterVolumeSpecName: "kube-api-access-nqzrz") pod "db157e88-5a3d-42de-9085-2a52cd33211a" (UID: "db157e88-5a3d-42de-9085-2a52cd33211a"). InnerVolumeSpecName "kube-api-access-nqzrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.434915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e159726-cef9-46df-b183-6b0b2f5b013e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.435296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e159726-cef9-46df-b183-6b0b2f5b013e-config-data\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.437525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxtw8\" (UniqueName: \"kubernetes.io/projected/4e159726-cef9-46df-b183-6b0b2f5b013e-kube-api-access-nxtw8\") pod \"nova-scheduler-0\" (UID: \"4e159726-cef9-46df-b183-6b0b2f5b013e\") " pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.439662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxtz\" (UniqueName: \"kubernetes.io/projected/2f30e5fb-d9e0-4048-9e6e-3559465be9d4-kube-api-access-mrxtz\") pod \"nova-metadata-0\" (UID: \"2f30e5fb-d9e0-4048-9e6e-3559465be9d4\") " pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.459621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-config-data" (OuterVolumeSpecName: "config-data") pod "db157e88-5a3d-42de-9085-2a52cd33211a" (UID: "db157e88-5a3d-42de-9085-2a52cd33211a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.468593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db157e88-5a3d-42de-9085-2a52cd33211a" (UID: "db157e88-5a3d-42de-9085-2a52cd33211a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.493145 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db157e88-5a3d-42de-9085-2a52cd33211a" (UID: "db157e88-5a3d-42de-9085-2a52cd33211a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.498968 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db157e88-5a3d-42de-9085-2a52cd33211a" (UID: "db157e88-5a3d-42de-9085-2a52cd33211a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.523492 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqzrz\" (UniqueName: \"kubernetes.io/projected/db157e88-5a3d-42de-9085-2a52cd33211a-kube-api-access-nqzrz\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.523543 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.523555 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.523566 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.523578 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db157e88-5a3d-42de-9085-2a52cd33211a-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.573888 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.613307 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.738106 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae96e75-8064-475a-9b3f-9b68932ff076" path="/var/lib/kubelet/pods/fae96e75-8064-475a-9b3f-9b68932ff076/volumes" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.739644 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffea2af5-d5b2-4ac9-ba03-82606dd1cccf" path="/var/lib/kubelet/pods/ffea2af5-d5b2-4ac9-ba03-82606dd1cccf/volumes" Dec 04 06:31:44 crc kubenswrapper[4832]: I1204 06:31:44.882921 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.052379 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f30e5fb-d9e0-4048-9e6e-3559465be9d4","Type":"ContainerStarted","Data":"ab3cb1aa3910c91ed2eecf423cda970f18a3a780b5a29eec3b874808815f5b04"} Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.056974 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db157e88-5a3d-42de-9085-2a52cd33211a","Type":"ContainerDied","Data":"ec20f0f3246d62dc4ebdb512d6282dd89a3ab83d3376631442cbe8fed6c927c0"} Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.057092 4832 scope.go:117] "RemoveContainer" containerID="1c4d9aec12aecf1a7e50ff3f7a8f857d123107b4a5912daa61fc148d647a7f17" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.057017 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.089146 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.095206 4832 scope.go:117] "RemoveContainer" containerID="050da5c27fa1bceee84b02fbc31ea84f60c0f7b0f9f5c44a19c650932e5a4356" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.103755 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.156567 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:45 crc kubenswrapper[4832]: E1204 06:31:45.157231 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-api" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.157250 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-api" Dec 04 06:31:45 crc kubenswrapper[4832]: E1204 06:31:45.157269 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-log" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.157275 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-log" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.157501 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-api" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.157529 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" containerName="nova-api-log" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.158731 4832 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.161698 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.161923 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.162134 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.169624 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.182567 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 06:31:45 crc kubenswrapper[4832]: W1204 06:31:45.210626 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e159726_cef9_46df_b183_6b0b2f5b013e.slice/crio-da8e47b34db4f2fe2d06a9916399898dc33f80761ffffffd381eba32effb4fb9 WatchSource:0}: Error finding container da8e47b34db4f2fe2d06a9916399898dc33f80761ffffffd381eba32effb4fb9: Status 404 returned error can't find the container with id da8e47b34db4f2fe2d06a9916399898dc33f80761ffffffd381eba32effb4fb9 Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.347071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-logs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.347154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.347190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqgh7\" (UniqueName: \"kubernetes.io/projected/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-kube-api-access-gqgh7\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.347250 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.347537 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-config-data\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.347679 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.450237 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.450427 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-logs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.450465 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.450499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqgh7\" (UniqueName: \"kubernetes.io/projected/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-kube-api-access-gqgh7\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.450540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.450612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-config-data\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.451108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-logs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.456716 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-config-data\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.456738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.460545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 
06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.460608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.468575 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqgh7\" (UniqueName: \"kubernetes.io/projected/ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4-kube-api-access-gqgh7\") pod \"nova-api-0\" (UID: \"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4\") " pod="openstack/nova-api-0" Dec 04 06:31:45 crc kubenswrapper[4832]: I1204 06:31:45.655070 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.076823 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f30e5fb-d9e0-4048-9e6e-3559465be9d4","Type":"ContainerStarted","Data":"dfa4aa765aa325e1ed98800ab41dc30fb0deb7b2a395013aaa29fb75ada3c50b"} Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.077305 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f30e5fb-d9e0-4048-9e6e-3559465be9d4","Type":"ContainerStarted","Data":"1402a8e54d266418cff5d7f463cc3fd0f4a2b842bb24fbfc5f78042cb63a4db6"} Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.082344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e159726-cef9-46df-b183-6b0b2f5b013e","Type":"ContainerStarted","Data":"20dbd2e132a56a2d2cc0acbf9a9fe842438604f8fd0c6a2786a5b26796e3b617"} Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.082431 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e159726-cef9-46df-b183-6b0b2f5b013e","Type":"ContainerStarted","Data":"da8e47b34db4f2fe2d06a9916399898dc33f80761ffffffd381eba32effb4fb9"} Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.132945 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.132920407 podStartE2EDuration="2.132920407s" podCreationTimestamp="2025-12-04 06:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:46.106703968 +0000 UTC m=+1361.719521694" watchObservedRunningTime="2025-12-04 06:31:46.132920407 +0000 UTC m=+1361.745738113" Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.154777 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.159845 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.159828905 podStartE2EDuration="2.159828905s" podCreationTimestamp="2025-12-04 06:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:46.127040881 +0000 UTC m=+1361.739858597" watchObservedRunningTime="2025-12-04 06:31:46.159828905 +0000 UTC m=+1361.772646611" Dec 04 06:31:46 crc kubenswrapper[4832]: I1204 06:31:46.726537 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db157e88-5a3d-42de-9085-2a52cd33211a" 
path="/var/lib/kubelet/pods/db157e88-5a3d-42de-9085-2a52cd33211a/volumes" Dec 04 06:31:47 crc kubenswrapper[4832]: I1204 06:31:47.100340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4","Type":"ContainerStarted","Data":"1c32749d754faff1029ac646a8bbaa416ad5a5b7fb8b3287f69a9be16ea83888"} Dec 04 06:31:47 crc kubenswrapper[4832]: I1204 06:31:47.100816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4","Type":"ContainerStarted","Data":"90481b1085db9064f88451c99ba03fae31301a5fea18599d38c531fa43602af4"} Dec 04 06:31:47 crc kubenswrapper[4832]: I1204 06:31:47.100837 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4","Type":"ContainerStarted","Data":"bab3b16c902de6227fb5d41ca3389b3953927b88ac841b47bbb9ed474383b662"} Dec 04 06:31:47 crc kubenswrapper[4832]: I1204 06:31:47.149794 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.149755891 podStartE2EDuration="2.149755891s" podCreationTimestamp="2025-12-04 06:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:31:47.129637521 +0000 UTC m=+1362.742455267" watchObservedRunningTime="2025-12-04 06:31:47.149755891 +0000 UTC m=+1362.762573607" Dec 04 06:31:49 crc kubenswrapper[4832]: I1204 06:31:49.574630 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 06:31:49 crc kubenswrapper[4832]: I1204 06:31:49.575139 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 06:31:49 crc kubenswrapper[4832]: I1204 06:31:49.613993 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 04 06:31:54 crc kubenswrapper[4832]: I1204 06:31:54.574693 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 06:31:54 crc kubenswrapper[4832]: I1204 06:31:54.575101 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 06:31:54 crc kubenswrapper[4832]: I1204 06:31:54.614005 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 04 06:31:54 crc kubenswrapper[4832]: I1204 06:31:54.655459 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 04 06:31:55 crc kubenswrapper[4832]: I1204 06:31:55.234468 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 04 06:31:55 crc kubenswrapper[4832]: I1204 06:31:55.593624 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2f30e5fb-d9e0-4048-9e6e-3559465be9d4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:55 crc kubenswrapper[4832]: I1204 06:31:55.593637 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2f30e5fb-d9e0-4048-9e6e-3559465be9d4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:55 crc kubenswrapper[4832]: I1204 06:31:55.655680 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:31:55 crc kubenswrapper[4832]: I1204 06:31:55.655747 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 06:31:56 crc kubenswrapper[4832]: I1204 06:31:56.667638 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:56 crc kubenswrapper[4832]: I1204 06:31:56.667645 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 06:31:58 crc kubenswrapper[4832]: I1204 06:31:58.561229 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 04 06:32:04 crc kubenswrapper[4832]: I1204 06:32:04.579803 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 06:32:04 crc kubenswrapper[4832]: I1204 06:32:04.583171 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 06:32:04 crc kubenswrapper[4832]: I1204 06:32:04.587816 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 06:32:05 crc kubenswrapper[4832]: I1204 06:32:05.322933 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 06:32:05 crc kubenswrapper[4832]: I1204 06:32:05.665476 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 04 06:32:05 crc kubenswrapper[4832]: I1204 06:32:05.666112 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 04 06:32:05 crc kubenswrapper[4832]: I1204 06:32:05.668279 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 04 06:32:05 crc kubenswrapper[4832]: I1204 06:32:05.678462 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 04 06:32:06 crc kubenswrapper[4832]: I1204 06:32:06.324749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 04 06:32:06 crc kubenswrapper[4832]: I1204 06:32:06.332289 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.063799 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d96r7"] Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.068659 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.080152 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d96r7"] Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.186260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-catalog-content\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.186438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-utilities\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.186546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2f89\" (UniqueName: \"kubernetes.io/projected/034790c8-6017-4376-bfd7-f8df06245d40-kube-api-access-n2f89\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.288335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2f89\" (UniqueName: \"kubernetes.io/projected/034790c8-6017-4376-bfd7-f8df06245d40-kube-api-access-n2f89\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.288448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-catalog-content\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.288557 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-utilities\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.289095 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-utilities\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.289123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-catalog-content\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.309367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n2f89\" (UniqueName: \"kubernetes.io/projected/034790c8-6017-4376-bfd7-f8df06245d40-kube-api-access-n2f89\") pod \"redhat-operators-d96r7\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.389160 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:12 crc kubenswrapper[4832]: I1204 06:32:12.896921 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d96r7"] Dec 04 06:32:13 crc kubenswrapper[4832]: I1204 06:32:13.387980 4832 generic.go:334] "Generic (PLEG): container finished" podID="034790c8-6017-4376-bfd7-f8df06245d40" containerID="19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a" exitCode=0 Dec 04 06:32:13 crc kubenswrapper[4832]: I1204 06:32:13.388083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerDied","Data":"19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a"} Dec 04 06:32:13 crc kubenswrapper[4832]: I1204 06:32:13.388355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerStarted","Data":"917a8a44f5461c4600fe8c632bd53ef4fe5f1118f217a00d2ef08dfcf6bd258b"} Dec 04 06:32:14 crc kubenswrapper[4832]: I1204 06:32:14.399481 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerStarted","Data":"6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f"} Dec 04 06:32:14 crc kubenswrapper[4832]: I1204 06:32:14.649588 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:32:15 crc kubenswrapper[4832]: I1204 06:32:15.412011 4832 generic.go:334] "Generic (PLEG): container finished" podID="034790c8-6017-4376-bfd7-f8df06245d40" containerID="6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f" exitCode=0 Dec 04 06:32:15 crc kubenswrapper[4832]: I1204 06:32:15.412062 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerDied","Data":"6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f"} Dec 04 06:32:15 crc kubenswrapper[4832]: I1204 06:32:15.928886 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:32:16 crc kubenswrapper[4832]: I1204 06:32:16.425421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerStarted","Data":"c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae"} Dec 04 06:32:16 crc kubenswrapper[4832]: I1204 06:32:16.453439 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d96r7" podStartSLOduration=1.806854671 podStartE2EDuration="4.453420796s" podCreationTimestamp="2025-12-04 06:32:12 +0000 UTC" firstStartedPulling="2025-12-04 06:32:13.39003941 +0000 UTC m=+1389.002857116" lastFinishedPulling="2025-12-04 06:32:16.036605535 +0000 UTC m=+1391.649423241" observedRunningTime="2025-12-04 
06:32:16.44669558 +0000 UTC m=+1392.059513286" watchObservedRunningTime="2025-12-04 06:32:16.453420796 +0000 UTC m=+1392.066238502" Dec 04 06:32:19 crc kubenswrapper[4832]: I1204 06:32:19.952246 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerName="rabbitmq" containerID="cri-o://a4fa05ed35c11a542a8d0eb9a657526f567af5481c9c946931f663f314dd361d" gracePeriod=604795 Dec 04 06:32:20 crc kubenswrapper[4832]: I1204 06:32:20.752688 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerName="rabbitmq" containerID="cri-o://265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c" gracePeriod=604796 Dec 04 06:32:22 crc kubenswrapper[4832]: I1204 06:32:22.390137 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:22 crc kubenswrapper[4832]: I1204 06:32:22.390542 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:23 crc kubenswrapper[4832]: I1204 06:32:23.446248 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d96r7" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="registry-server" probeResult="failure" output=< Dec 04 06:32:23 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Dec 04 06:32:23 crc kubenswrapper[4832]: > Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.543028 4832 generic.go:334] "Generic (PLEG): container finished" podID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerID="a4fa05ed35c11a542a8d0eb9a657526f567af5481c9c946931f663f314dd361d" exitCode=0 Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.543210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3","Type":"ContainerDied","Data":"a4fa05ed35c11a542a8d0eb9a657526f567af5481c9c946931f663f314dd361d"} Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.543933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3","Type":"ContainerDied","Data":"98376ab0a149a0dbcf39f8f680d48fa1374dc2756604bfc47f99019c2ed2cbfe"} Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.543956 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98376ab0a149a0dbcf39f8f680d48fa1374dc2756604bfc47f99019c2ed2cbfe" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.608855 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.738695 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.738781 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-tls\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.738877 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-confd\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.738920 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-erlang-cookie\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739023 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-pod-info\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-config-data\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739152 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-plugins\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739191 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739220 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pvjv\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-kube-api-access-5pvjv\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739262 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-erlang-cookie-secret\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: 
\"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739302 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-plugins-conf\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.739698 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.740534 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.741115 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.741137 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.741148 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.748450 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.751133 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-pod-info" (OuterVolumeSpecName: "pod-info") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.751538 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.753040 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-kube-api-access-5pvjv" (OuterVolumeSpecName: "kube-api-access-5pvjv") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "kube-api-access-5pvjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.760181 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.813881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-config-data" (OuterVolumeSpecName: "config-data") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.842556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.845298 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf\") pod \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\" (UID: \"0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3\") " Dec 04 06:32:26 crc kubenswrapper[4832]: W1204 06:32:26.845593 4832 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3/volumes/kubernetes.io~configmap/server-conf Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.845620 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf" (OuterVolumeSpecName: "server-conf") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846380 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846684 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-pod-info\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846723 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846762 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846776 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pvjv\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-kube-api-access-5pvjv\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846790 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.846801 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-server-conf\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.883819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" (UID: "0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.888819 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.948353 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:26 crc kubenswrapper[4832]: I1204 06:32:26.948704 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.413615 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-confd\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456237 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-plugins\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456286 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfkdz\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-kube-api-access-xfkdz\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456311 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-plugins-conf\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456341 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d41c5c2-5373-423b-b14f-00c902111ee3-pod-info\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d41c5c2-5373-423b-b14f-00c902111ee3-erlang-cookie-secret\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-config-data\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456509 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-erlang-cookie\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456540 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-tls\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: 
\"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456602 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-server-conf\") pod \"1d41c5c2-5373-423b-b14f-00c902111ee3\" (UID: \"1d41c5c2-5373-423b-b14f-00c902111ee3\") " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.456996 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.457242 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.457532 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.473640 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.479236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1d41c5c2-5373-423b-b14f-00c902111ee3-pod-info" (OuterVolumeSpecName: "pod-info") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.479740 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-kube-api-access-xfkdz" (OuterVolumeSpecName: "kube-api-access-xfkdz") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "kube-api-access-xfkdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.505060 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.526187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d41c5c2-5373-423b-b14f-00c902111ee3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558198 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558242 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfkdz\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-kube-api-access-xfkdz\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558256 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558287 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558300 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d41c5c2-5373-423b-b14f-00c902111ee3-pod-info\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558312 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d41c5c2-5373-423b-b14f-00c902111ee3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558325 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.558338 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.591716 4832 generic.go:334] "Generic (PLEG): container finished" podID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerID="265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c" exitCode=0 Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.591813 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.592089 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.591904 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1d41c5c2-5373-423b-b14f-00c902111ee3","Type":"ContainerDied","Data":"265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c"} Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.592330 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1d41c5c2-5373-423b-b14f-00c902111ee3","Type":"ContainerDied","Data":"68f3ab45862f4bd190c9f4d90c8ea2c64006a6c1ad29c64ad70b40c5c740f66a"} Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.592382 4832 scope.go:117] "RemoveContainer" containerID="265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.695662 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-config-data" (OuterVolumeSpecName: "config-data") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.711944 4832 scope.go:117] "RemoveContainer" containerID="acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.747699 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.764595 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.778867 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.778901 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.796916 4832 scope.go:117] "RemoveContainer" containerID="265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c" Dec 04 06:32:27 crc kubenswrapper[4832]: E1204 06:32:27.798418 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c\": container with ID starting with 265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c not found: ID does not exist" containerID="265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.798485 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c"} err="failed to get container status \"265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c\": rpc error: code = NotFound desc = could not find container \"265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c\": container with ID starting with 
265752b34380434fdd5bb54c07ba0fb218ab275634787d06020124c5de78b05c not found: ID does not exist" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.798515 4832 scope.go:117] "RemoveContainer" containerID="acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663" Dec 04 06:32:27 crc kubenswrapper[4832]: E1204 06:32:27.799669 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663\": container with ID starting with acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663 not found: ID does not exist" containerID="acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.799713 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663"} err="failed to get container status \"acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663\": rpc error: code = NotFound desc = could not find container \"acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663\": container with ID starting with acd88f39d1f1df79a40de06ac24a1db1091f12f9d42d95a1aefade8dd94e4663 not found: ID does not exist" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.805987 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-server-conf" (OuterVolumeSpecName: "server-conf") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.806053 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.818434 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: E1204 06:32:27.818986 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerName="rabbitmq" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.819005 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerName="rabbitmq" Dec 04 06:32:27 crc kubenswrapper[4832]: E1204 06:32:27.819022 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerName="rabbitmq" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.819029 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerName="rabbitmq" Dec 04 06:32:27 crc kubenswrapper[4832]: E1204 06:32:27.819052 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerName="setup-container" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.819059 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerName="setup-container" Dec 04 06:32:27 crc kubenswrapper[4832]: E1204 06:32:27.819078 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerName="setup-container" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.819084 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerName="setup-container" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.819289 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" containerName="rabbitmq" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.819316 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" containerName="rabbitmq" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.820533 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.822518 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.822933 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.828308 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1d41c5c2-5373-423b-b14f-00c902111ee3" (UID: "1d41c5c2-5373-423b-b14f-00c902111ee3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.828560 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.828778 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.828930 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.829080 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.829360 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b2fcj" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.843130 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.880359 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d41c5c2-5373-423b-b14f-00c902111ee3-server-conf\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.880404 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d41c5c2-5373-423b-b14f-00c902111ee3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.927642 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.942022 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.968206 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.969797 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.973464 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.973715 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gzxvp" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.974532 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.974618 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.976218 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.977575 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.977606 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984239 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vkz\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-kube-api-access-x5vkz\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984263 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5152b11-80fa-4fd7-90df-132972214b18-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984423 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5152b11-80fa-4fd7-90df-132972214b18-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " 
pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984463 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984532 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.984550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:27 crc kubenswrapper[4832]: I1204 06:32:27.997466 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086721 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.086980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65d1124e-f647-4d3c-b10e-c01691fb6c9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.087777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5vkz\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-kube-api-access-x5vkz\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088330 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5152b11-80fa-4fd7-90df-132972214b18-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: 
I1204 06:32:28.088351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65d1124e-f647-4d3c-b10e-c01691fb6c9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.088469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089180 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hc6\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-kube-api-access-h8hc6\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089231 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089281 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089408 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5152b11-80fa-4fd7-90df-132972214b18-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.089801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.090447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.090706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5152b11-80fa-4fd7-90df-132972214b18-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.093586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.095536 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5152b11-80fa-4fd7-90df-132972214b18-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.096187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.108840 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5152b11-80fa-4fd7-90df-132972214b18-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 
06:32:28.112869 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5vkz\" (UniqueName: \"kubernetes.io/projected/b5152b11-80fa-4fd7-90df-132972214b18-kube-api-access-x5vkz\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.134674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b5152b11-80fa-4fd7-90df-132972214b18\") " pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.143648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.191301 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.192233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.192601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65d1124e-f647-4d3c-b10e-c01691fb6c9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.192766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.192876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.193070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.193220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.193370 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.193500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65d1124e-f647-4d3c-b10e-c01691fb6c9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.193625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.194900 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.194099 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.194175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.194288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.194577 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.193885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65d1124e-f647-4d3c-b10e-c01691fb6c9b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.196018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hc6\" (UniqueName: 
\"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-kube-api-access-h8hc6\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.199027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.202227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65d1124e-f647-4d3c-b10e-c01691fb6c9b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.204810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.211133 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65d1124e-f647-4d3c-b10e-c01691fb6c9b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.225327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hc6\" (UniqueName: \"kubernetes.io/projected/65d1124e-f647-4d3c-b10e-c01691fb6c9b-kube-api-access-h8hc6\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.240737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65d1124e-f647-4d3c-b10e-c01691fb6c9b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.291837 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.678317 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.731045 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3" path="/var/lib/kubelet/pods/0ee9dc35-7baf-448f-a6fc-3f73c1b5d6f3/volumes" Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.731921 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d41c5c2-5373-423b-b14f-00c902111ee3" path="/var/lib/kubelet/pods/1d41c5c2-5373-423b-b14f-00c902111ee3/volumes" Dec 04 06:32:28 crc kubenswrapper[4832]: W1204 06:32:28.850237 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d1124e_f647_4d3c_b10e_c01691fb6c9b.slice/crio-24fabd7602eb04dbcd84ae0963f761a1b53e1aa330051633334a2e64ab95fc85 WatchSource:0}: Error finding container 24fabd7602eb04dbcd84ae0963f761a1b53e1aa330051633334a2e64ab95fc85: Status 404 returned error can't find the container with id 24fabd7602eb04dbcd84ae0963f761a1b53e1aa330051633334a2e64ab95fc85 Dec 04 06:32:28 crc kubenswrapper[4832]: I1204 06:32:28.856050 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.562357 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kj8pn"] Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.564941 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.571488 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.604457 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kj8pn"] Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-config\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631269 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfn6d\" (UniqueName: \"kubernetes.io/projected/a4e0482e-2a3f-4838-8143-2e7671a8c819-kube-api-access-tfn6d\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.631338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.638141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65d1124e-f647-4d3c-b10e-c01691fb6c9b","Type":"ContainerStarted","Data":"24fabd7602eb04dbcd84ae0963f761a1b53e1aa330051633334a2e64ab95fc85"} Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.640793 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5152b11-80fa-4fd7-90df-132972214b18","Type":"ContainerStarted","Data":"e04dda4b5c00dda57076f0b5ad6212f2eb818d9b4cf96c4964f2a1120213d7cb"} Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-config\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfn6d\" (UniqueName: \"kubernetes.io/projected/a4e0482e-2a3f-4838-8143-2e7671a8c819-kube-api-access-tfn6d\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: 
\"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734414 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.734812 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.735964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.736784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.736957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.737252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-config\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.737417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.737474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc 
kubenswrapper[4832]: I1204 06:32:29.849880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfn6d\" (UniqueName: \"kubernetes.io/projected/a4e0482e-2a3f-4838-8143-2e7671a8c819-kube-api-access-tfn6d\") pod \"dnsmasq-dns-79bd4cc8c9-kj8pn\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:29 crc kubenswrapper[4832]: I1204 06:32:29.903975 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:30 crc kubenswrapper[4832]: I1204 06:32:30.388326 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kj8pn"] Dec 04 06:32:30 crc kubenswrapper[4832]: I1204 06:32:30.681926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" event={"ID":"a4e0482e-2a3f-4838-8143-2e7671a8c819","Type":"ContainerStarted","Data":"76253bb05621ae7a5686e32b55b503026a44290aaa8875a892fe3d7d37cecbbe"} Dec 04 06:32:30 crc kubenswrapper[4832]: I1204 06:32:30.687361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5152b11-80fa-4fd7-90df-132972214b18","Type":"ContainerStarted","Data":"faf9ed34e1f75dc4e5e42d7c87f299371d77b96c038546d9bfc5281f30c85c5a"} Dec 04 06:32:31 crc kubenswrapper[4832]: I1204 06:32:31.697866 4832 generic.go:334] "Generic (PLEG): container finished" podID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerID="1e7a3203579331443e8cdba70538c47fa8653c14ac8661b425d81d771651d825" exitCode=0 Dec 04 06:32:31 crc kubenswrapper[4832]: I1204 06:32:31.697944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" event={"ID":"a4e0482e-2a3f-4838-8143-2e7671a8c819","Type":"ContainerDied","Data":"1e7a3203579331443e8cdba70538c47fa8653c14ac8661b425d81d771651d825"} Dec 04 06:32:31 crc kubenswrapper[4832]: I1204 06:32:31.700075 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65d1124e-f647-4d3c-b10e-c01691fb6c9b","Type":"ContainerStarted","Data":"11f09053895f74ba79865976c7a4bb124bccf33e75984887e16c3507f1d9d0be"} Dec 04 06:32:32 crc kubenswrapper[4832]: I1204 06:32:32.439886 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:32 crc kubenswrapper[4832]: I1204 06:32:32.489921 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:32 crc kubenswrapper[4832]: I1204 06:32:32.682443 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d96r7"] Dec 04 06:32:32 crc kubenswrapper[4832]: I1204 06:32:32.720325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" event={"ID":"a4e0482e-2a3f-4838-8143-2e7671a8c819","Type":"ContainerStarted","Data":"e7b4dff811d7bf4bd6c8ad46f5a4e7618c27c261395c79c000b939d4a9ba1a90"} Dec 04 06:32:32 crc kubenswrapper[4832]: I1204 06:32:32.740421 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" podStartSLOduration=3.740384541 podStartE2EDuration="3.740384541s" podCreationTimestamp="2025-12-04 06:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:32:32.729949413 +0000 UTC 
m=+1408.342767119" watchObservedRunningTime="2025-12-04 06:32:32.740384541 +0000 UTC m=+1408.353202247" Dec 04 06:32:33 crc kubenswrapper[4832]: I1204 06:32:33.726518 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:33 crc kubenswrapper[4832]: I1204 06:32:33.726646 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d96r7" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="registry-server" containerID="cri-o://c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae" gracePeriod=2 Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.251815 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.349616 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-utilities\") pod \"034790c8-6017-4376-bfd7-f8df06245d40\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.349723 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-catalog-content\") pod \"034790c8-6017-4376-bfd7-f8df06245d40\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.349844 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2f89\" (UniqueName: \"kubernetes.io/projected/034790c8-6017-4376-bfd7-f8df06245d40-kube-api-access-n2f89\") pod \"034790c8-6017-4376-bfd7-f8df06245d40\" (UID: \"034790c8-6017-4376-bfd7-f8df06245d40\") " Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.350714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-utilities" (OuterVolumeSpecName: "utilities") pod "034790c8-6017-4376-bfd7-f8df06245d40" (UID: "034790c8-6017-4376-bfd7-f8df06245d40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.359809 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034790c8-6017-4376-bfd7-f8df06245d40-kube-api-access-n2f89" (OuterVolumeSpecName: "kube-api-access-n2f89") pod "034790c8-6017-4376-bfd7-f8df06245d40" (UID: "034790c8-6017-4376-bfd7-f8df06245d40"). InnerVolumeSpecName "kube-api-access-n2f89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.452530 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.452588 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2f89\" (UniqueName: \"kubernetes.io/projected/034790c8-6017-4376-bfd7-f8df06245d40-kube-api-access-n2f89\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.490732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "034790c8-6017-4376-bfd7-f8df06245d40" (UID: "034790c8-6017-4376-bfd7-f8df06245d40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.554777 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/034790c8-6017-4376-bfd7-f8df06245d40-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.743211 4832 generic.go:334] "Generic (PLEG): container finished" podID="034790c8-6017-4376-bfd7-f8df06245d40" containerID="c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae" exitCode=0 Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.743292 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d96r7" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.743291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerDied","Data":"c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae"} Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.743340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d96r7" event={"ID":"034790c8-6017-4376-bfd7-f8df06245d40","Type":"ContainerDied","Data":"917a8a44f5461c4600fe8c632bd53ef4fe5f1118f217a00d2ef08dfcf6bd258b"} Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.743365 4832 scope.go:117] "RemoveContainer" containerID="c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.778123 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d96r7"] Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.783106 4832 scope.go:117] "RemoveContainer" containerID="6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.792938 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d96r7"] Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.820725 4832 scope.go:117] "RemoveContainer" containerID="19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.856454 4832 scope.go:117] "RemoveContainer" containerID="c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae" Dec 04 06:32:34 crc kubenswrapper[4832]: E1204 06:32:34.866782 4832 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae\": container with ID starting with c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae not found: ID does not exist" containerID="c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.866869 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae"} err="failed to get container status \"c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae\": rpc error: code = NotFound desc = could not find container \"c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae\": container with ID starting with c39ed96a0a1c2b3dcb9e9c0e06cd1514671bccf84044e8d1551c59e430752dae not found: ID does not exist" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.866917 4832 scope.go:117] "RemoveContainer" containerID="6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f" Dec 04 06:32:34 crc kubenswrapper[4832]: E1204 06:32:34.867697 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f\": container with ID starting with 6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f not found: ID does not exist" containerID="6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.867768 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f"} err="failed to get container status \"6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f\": rpc error: code = NotFound desc = could not find container \"6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f\": container with ID starting with 6a9af1e95af8e0147500c723425da45d54d77fbb417e2406a3047eb61e8f317f not found: ID does not exist" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.867813 4832 scope.go:117] "RemoveContainer" containerID="19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a" Dec 04 06:32:34 crc kubenswrapper[4832]: E1204 06:32:34.868748 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a\": container with ID starting with 19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a not found: ID does not exist" containerID="19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a" Dec 04 06:32:34 crc kubenswrapper[4832]: I1204 06:32:34.869538 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a"} err="failed to get container status \"19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a\": rpc error: code = NotFound desc = could not find container \"19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a\": container with ID starting with 19630c984a735d57ad6dff7e7a969c5f07904143444e0d4a109333d7e9c2d99a not found: ID does not exist" Dec 04 06:32:36 crc kubenswrapper[4832]: I1204 06:32:36.727699 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="034790c8-6017-4376-bfd7-f8df06245d40" path="/var/lib/kubelet/pods/034790c8-6017-4376-bfd7-f8df06245d40/volumes" Dec 04 06:32:39 crc kubenswrapper[4832]: I1204 06:32:39.906660 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.016702 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xj58h"] Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.017025 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="dnsmasq-dns" containerID="cri-o://912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463" gracePeriod=10 Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.189856 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: connection refused" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.356000 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-wpmzl"] Dec 04 06:32:40 crc kubenswrapper[4832]: E1204 06:32:40.356729 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="extract-content" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.356750 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="extract-content" Dec 04 06:32:40 crc kubenswrapper[4832]: E1204 06:32:40.356773 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="registry-server" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.356781 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="registry-server" Dec 04 06:32:40 crc kubenswrapper[4832]: E1204 06:32:40.356799 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="extract-utilities" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.356850 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="extract-utilities" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.357147 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="034790c8-6017-4376-bfd7-f8df06245d40" containerName="registry-server" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.359756 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.375673 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-wpmzl"] Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.400372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-dns-svc\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.400636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.400667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.400694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-config\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.400765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.400970 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.401135 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp7hv\" (UniqueName: \"kubernetes.io/projected/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-kube-api-access-jp7hv\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.503895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-dns-svc\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.504013 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.504047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.504074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-config\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.504139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.504224 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.504283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp7hv\" (UniqueName: \"kubernetes.io/projected/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-kube-api-access-jp7hv\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.505600 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-dns-svc\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.505733 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.507331 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-config\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.508611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.509522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.509596 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.536521 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp7hv\" (UniqueName: \"kubernetes.io/projected/fc3ad9fb-9341-4b1f-8b27-ee71d9f37309-kube-api-access-jp7hv\") pod \"dnsmasq-dns-55478c4467-wpmzl\" (UID: \"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309\") " pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.631163 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.697752 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.708725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-svc\") pod \"59983b29-268f-440b-a57c-7d3584241778\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.708809 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-config\") pod \"59983b29-268f-440b-a57c-7d3584241778\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.708896 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmt9\" (UniqueName: \"kubernetes.io/projected/59983b29-268f-440b-a57c-7d3584241778-kube-api-access-9qmt9\") pod \"59983b29-268f-440b-a57c-7d3584241778\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.708921 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-nb\") pod \"59983b29-268f-440b-a57c-7d3584241778\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.708974 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-swift-storage-0\") pod \"59983b29-268f-440b-a57c-7d3584241778\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " Dec 
04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.709605 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-sb\") pod \"59983b29-268f-440b-a57c-7d3584241778\" (UID: \"59983b29-268f-440b-a57c-7d3584241778\") " Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.749215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59983b29-268f-440b-a57c-7d3584241778-kube-api-access-9qmt9" (OuterVolumeSpecName: "kube-api-access-9qmt9") pod "59983b29-268f-440b-a57c-7d3584241778" (UID: "59983b29-268f-440b-a57c-7d3584241778"). InnerVolumeSpecName "kube-api-access-9qmt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.774619 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59983b29-268f-440b-a57c-7d3584241778" (UID: "59983b29-268f-440b-a57c-7d3584241778"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.779511 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59983b29-268f-440b-a57c-7d3584241778" (UID: "59983b29-268f-440b-a57c-7d3584241778"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.789724 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-config" (OuterVolumeSpecName: "config") pod "59983b29-268f-440b-a57c-7d3584241778" (UID: "59983b29-268f-440b-a57c-7d3584241778"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.804556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59983b29-268f-440b-a57c-7d3584241778" (UID: "59983b29-268f-440b-a57c-7d3584241778"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.812953 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.813008 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qmt9\" (UniqueName: \"kubernetes.io/projected/59983b29-268f-440b-a57c-7d3584241778-kube-api-access-9qmt9\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.813024 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.813038 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.813049 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.816015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59983b29-268f-440b-a57c-7d3584241778" (UID: "59983b29-268f-440b-a57c-7d3584241778"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.825427 4832 generic.go:334] "Generic (PLEG): container finished" podID="59983b29-268f-440b-a57c-7d3584241778" containerID="912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463" exitCode=0 Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.825505 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.825527 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" event={"ID":"59983b29-268f-440b-a57c-7d3584241778","Type":"ContainerDied","Data":"912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463"} Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.825603 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xj58h" event={"ID":"59983b29-268f-440b-a57c-7d3584241778","Type":"ContainerDied","Data":"6675c14cd4d65ac05de6e3fe68091a859eec4eabc9d7bd8d0cd7f65ccc75659c"} Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.825639 4832 scope.go:117] "RemoveContainer" containerID="912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.856606 4832 scope.go:117] "RemoveContainer" containerID="225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.871583 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xj58h"] Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.884059 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xj58h"] Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.899252 4832 scope.go:117] "RemoveContainer" containerID="912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463" Dec 04 06:32:40 crc kubenswrapper[4832]: E1204 06:32:40.900452 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463\": container with ID starting with 912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463 not found: ID does not exist" containerID="912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.900525 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463"} err="failed to get container status \"912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463\": rpc error: code = NotFound desc = could not find container \"912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463\": container with ID starting with 912f66e51fdf359428ff07b031ae1514da4686aef2a02d9d08821cd7bddb6463 not found: ID does not exist" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.900570 4832 scope.go:117] "RemoveContainer" containerID="225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714" Dec 04 06:32:40 crc kubenswrapper[4832]: E1204 06:32:40.900912 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714\": container with ID starting with 225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714 not found: ID does not exist" containerID="225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.900939 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714"} err="failed to get container status 
\"225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714\": rpc error: code = NotFound desc = could not find container \"225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714\": container with ID starting with 225c551036dc11d12028717f3d3cb9460032192ff28bcd75f1da9ff5b6e8b714 not found: ID does not exist" Dec 04 06:32:40 crc kubenswrapper[4832]: I1204 06:32:40.914875 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59983b29-268f-440b-a57c-7d3584241778-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:41 crc kubenswrapper[4832]: I1204 06:32:41.244171 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-wpmzl"] Dec 04 06:32:41 crc kubenswrapper[4832]: W1204 06:32:41.247437 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc3ad9fb_9341_4b1f_8b27_ee71d9f37309.slice/crio-170af2ace58f85169c73a1cd398f595ff4a704ccde8f636ae4680a1a79b294ff WatchSource:0}: Error finding container 170af2ace58f85169c73a1cd398f595ff4a704ccde8f636ae4680a1a79b294ff: Status 404 returned error can't find the container with id 170af2ace58f85169c73a1cd398f595ff4a704ccde8f636ae4680a1a79b294ff Dec 04 06:32:41 crc kubenswrapper[4832]: I1204 06:32:41.880664 4832 generic.go:334] "Generic (PLEG): container finished" podID="fc3ad9fb-9341-4b1f-8b27-ee71d9f37309" containerID="5f14c15934b1a20e257f6a52dec1420edb041eee2cca977c262d8c21daaaa6ff" exitCode=0 Dec 04 06:32:41 crc kubenswrapper[4832]: I1204 06:32:41.880798 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" event={"ID":"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309","Type":"ContainerDied","Data":"5f14c15934b1a20e257f6a52dec1420edb041eee2cca977c262d8c21daaaa6ff"} Dec 04 06:32:41 crc kubenswrapper[4832]: I1204 06:32:41.881203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" event={"ID":"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309","Type":"ContainerStarted","Data":"170af2ace58f85169c73a1cd398f595ff4a704ccde8f636ae4680a1a79b294ff"} Dec 04 06:32:42 crc kubenswrapper[4832]: I1204 06:32:42.721712 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59983b29-268f-440b-a57c-7d3584241778" path="/var/lib/kubelet/pods/59983b29-268f-440b-a57c-7d3584241778/volumes" Dec 04 06:32:42 crc kubenswrapper[4832]: I1204 06:32:42.892616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" event={"ID":"fc3ad9fb-9341-4b1f-8b27-ee71d9f37309","Type":"ContainerStarted","Data":"29d51a77f3e732acaa19414e33ea993859b6a13bbafe6311dae433fe8871e849"} Dec 04 06:32:42 crc kubenswrapper[4832]: I1204 06:32:42.893040 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:42 crc kubenswrapper[4832]: I1204 06:32:42.913607 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" podStartSLOduration=2.9135826849999997 podStartE2EDuration="2.913582685s" podCreationTimestamp="2025-12-04 06:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:32:42.913028402 +0000 UTC m=+1418.525846108" watchObservedRunningTime="2025-12-04 06:32:42.913582685 +0000 UTC m=+1418.526400391" Dec 04 06:32:50 crc kubenswrapper[4832]: I1204 06:32:50.699630 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-wpmzl" Dec 04 06:32:50 crc kubenswrapper[4832]: I1204 06:32:50.781572 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kj8pn"] Dec 04 06:32:50 crc kubenswrapper[4832]: I1204 06:32:50.781958 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerName="dnsmasq-dns" containerID="cri-o://e7b4dff811d7bf4bd6c8ad46f5a4e7618c27c261395c79c000b939d4a9ba1a90" gracePeriod=10 Dec 04 06:32:50 crc kubenswrapper[4832]: I1204 06:32:50.993376 4832 generic.go:334] "Generic (PLEG): container finished" podID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerID="e7b4dff811d7bf4bd6c8ad46f5a4e7618c27c261395c79c000b939d4a9ba1a90" exitCode=0 Dec 04 06:32:50 crc kubenswrapper[4832]: I1204 06:32:50.993479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" event={"ID":"a4e0482e-2a3f-4838-8143-2e7671a8c819","Type":"ContainerDied","Data":"e7b4dff811d7bf4bd6c8ad46f5a4e7618c27c261395c79c000b939d4a9ba1a90"} Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.395909 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.485799 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.485879 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-openstack-edpm-ipam\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.486021 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-swift-storage-0\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.486215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-svc\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.486325 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfn6d\" (UniqueName: \"kubernetes.io/projected/a4e0482e-2a3f-4838-8143-2e7671a8c819-kube-api-access-tfn6d\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.486360 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-config\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc 
kubenswrapper[4832]: I1204 06:32:51.486423 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-nb\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.498162 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e0482e-2a3f-4838-8143-2e7671a8c819-kube-api-access-tfn6d" (OuterVolumeSpecName: "kube-api-access-tfn6d") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "kube-api-access-tfn6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.551893 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.556500 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.563107 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.566962 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.587109 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.588191 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb\") pod \"a4e0482e-2a3f-4838-8143-2e7671a8c819\" (UID: \"a4e0482e-2a3f-4838-8143-2e7671a8c819\") " Dec 04 06:32:51 crc kubenswrapper[4832]: W1204 06:32:51.588593 4832 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a4e0482e-2a3f-4838-8143-2e7671a8c819/volumes/kubernetes.io~configmap/ovsdbserver-sb Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.588727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.589677 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.590048 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.590145 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfn6d\" (UniqueName: \"kubernetes.io/projected/a4e0482e-2a3f-4838-8143-2e7671a8c819-kube-api-access-tfn6d\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.590243 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.590335 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.590450 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.593220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-config" (OuterVolumeSpecName: "config") pod "a4e0482e-2a3f-4838-8143-2e7671a8c819" (UID: "a4e0482e-2a3f-4838-8143-2e7671a8c819"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:32:51 crc kubenswrapper[4832]: I1204 06:32:51.692908 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4e0482e-2a3f-4838-8143-2e7671a8c819-config\") on node \"crc\" DevicePath \"\"" Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.008487 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" event={"ID":"a4e0482e-2a3f-4838-8143-2e7671a8c819","Type":"ContainerDied","Data":"76253bb05621ae7a5686e32b55b503026a44290aaa8875a892fe3d7d37cecbbe"} Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.008578 4832 scope.go:117] "RemoveContainer" containerID="e7b4dff811d7bf4bd6c8ad46f5a4e7618c27c261395c79c000b939d4a9ba1a90" Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.009758 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kj8pn" Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.036578 4832 scope.go:117] "RemoveContainer" containerID="1e7a3203579331443e8cdba70538c47fa8653c14ac8661b425d81d771651d825" Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.073887 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kj8pn"] Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.087853 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kj8pn"] Dec 04 06:32:52 crc kubenswrapper[4832]: I1204 06:32:52.723965 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" path="/var/lib/kubelet/pods/a4e0482e-2a3f-4838-8143-2e7671a8c819/volumes" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.120347 4832 generic.go:334] "Generic (PLEG): container finished" podID="b5152b11-80fa-4fd7-90df-132972214b18" containerID="faf9ed34e1f75dc4e5e42d7c87f299371d77b96c038546d9bfc5281f30c85c5a" exitCode=0 Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.120433 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5152b11-80fa-4fd7-90df-132972214b18","Type":"ContainerDied","Data":"faf9ed34e1f75dc4e5e42d7c87f299371d77b96c038546d9bfc5281f30c85c5a"} Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.123461 4832 generic.go:334] "Generic (PLEG): container finished" podID="65d1124e-f647-4d3c-b10e-c01691fb6c9b" containerID="11f09053895f74ba79865976c7a4bb124bccf33e75984887e16c3507f1d9d0be" exitCode=0 Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.123504 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65d1124e-f647-4d3c-b10e-c01691fb6c9b","Type":"ContainerDied","Data":"11f09053895f74ba79865976c7a4bb124bccf33e75984887e16c3507f1d9d0be"} Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.275307 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh"] Dec 04 06:33:03 crc kubenswrapper[4832]: E1204 06:33:03.276687 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="dnsmasq-dns" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.276800 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="dnsmasq-dns" Dec 04 06:33:03 crc kubenswrapper[4832]: E1204 06:33:03.276909 4832 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="init" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.276922 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="init" Dec 04 06:33:03 crc kubenswrapper[4832]: E1204 06:33:03.276942 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerName="dnsmasq-dns" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.276949 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerName="dnsmasq-dns" Dec 04 06:33:03 crc kubenswrapper[4832]: E1204 06:33:03.276980 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerName="init" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.277055 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerName="init" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.277324 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="59983b29-268f-440b-a57c-7d3584241778" containerName="dnsmasq-dns" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.277347 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e0482e-2a3f-4838-8143-2e7671a8c819" containerName="dnsmasq-dns" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.278115 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.284526 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh"] Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.284826 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.285027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.285292 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.286011 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.469363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.469551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gthl\" (UniqueName: \"kubernetes.io/projected/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-kube-api-access-6gthl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.469627 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.469722 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.570889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gthl\" (UniqueName: \"kubernetes.io/projected/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-kube-api-access-6gthl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.570986 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.571061 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.571114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.576140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.583170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.594111 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.603266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gthl\" (UniqueName: \"kubernetes.io/projected/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-kube-api-access-6gthl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:03 crc kubenswrapper[4832]: I1204 06:33:03.664428 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.147348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5152b11-80fa-4fd7-90df-132972214b18","Type":"ContainerStarted","Data":"204ccccb367c19309637810414d9807138eb2391ad007f858a82bcbdfaf4da12"} Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.148049 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.151199 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65d1124e-f647-4d3c-b10e-c01691fb6c9b","Type":"ContainerStarted","Data":"7764720606e6bb774f5528a021f07c7f2de9e86ca06ff40b444a99c590f5a5a9"} Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.152082 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.188517 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.188493538 podStartE2EDuration="37.188493538s" podCreationTimestamp="2025-12-04 06:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:33:04.179142258 +0000 UTC m=+1439.791959974" watchObservedRunningTime="2025-12-04 06:33:04.188493538 +0000 UTC m=+1439.801311244" Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.206813 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.206793817 podStartE2EDuration="37.206793817s" podCreationTimestamp="2025-12-04 06:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:33:04.204816728 +0000 UTC m=+1439.817634434" watchObservedRunningTime="2025-12-04 06:33:04.206793817 +0000 UTC m=+1439.819611533" Dec 04 06:33:04 crc kubenswrapper[4832]: I1204 06:33:04.277418 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh"] Dec 04 06:33:04 crc kubenswrapper[4832]: W1204 06:33:04.284679 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f4cc7d6_382c_46c7_a422_d728d7d8aa19.slice/crio-b5562aef0a70856d6851e4883e257dd089b8625f57a3841a6b9f8a78544bead2 
WatchSource:0}: Error finding container b5562aef0a70856d6851e4883e257dd089b8625f57a3841a6b9f8a78544bead2: Status 404 returned error can't find the container with id b5562aef0a70856d6851e4883e257dd089b8625f57a3841a6b9f8a78544bead2 Dec 04 06:33:05 crc kubenswrapper[4832]: I1204 06:33:05.162551 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" event={"ID":"2f4cc7d6-382c-46c7-a422-d728d7d8aa19","Type":"ContainerStarted","Data":"b5562aef0a70856d6851e4883e257dd089b8625f57a3841a6b9f8a78544bead2"} Dec 04 06:33:05 crc kubenswrapper[4832]: I1204 06:33:05.362876 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:33:05 crc kubenswrapper[4832]: I1204 06:33:05.363303 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:33:16 crc kubenswrapper[4832]: I1204 06:33:16.319936 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:33:17 crc kubenswrapper[4832]: I1204 06:33:17.341114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" event={"ID":"2f4cc7d6-382c-46c7-a422-d728d7d8aa19","Type":"ContainerStarted","Data":"d1c2a6a52caf52a6d65a882a396f88d719f4d44c292dbd0dbc6636b6bc9aa77f"} Dec 04 06:33:17 crc kubenswrapper[4832]: I1204 06:33:17.366027 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" podStartSLOduration=2.335370094 podStartE2EDuration="14.366005308s" podCreationTimestamp="2025-12-04 06:33:03 +0000 UTC" firstStartedPulling="2025-12-04 06:33:04.286609164 +0000 UTC m=+1439.899426870" lastFinishedPulling="2025-12-04 06:33:16.317244378 +0000 UTC m=+1451.930062084" observedRunningTime="2025-12-04 06:33:17.355849489 +0000 UTC m=+1452.968667195" watchObservedRunningTime="2025-12-04 06:33:17.366005308 +0000 UTC m=+1452.978823014" Dec 04 06:33:18 crc kubenswrapper[4832]: I1204 06:33:18.146876 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 04 06:33:18 crc kubenswrapper[4832]: I1204 06:33:18.295631 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 04 06:33:18 crc kubenswrapper[4832]: I1204 06:33:18.303344 4832 scope.go:117] "RemoveContainer" containerID="50c4b8bc59996799ec0754cae7a6f82efda61b06d40f90fa21ecb25db8188717" Dec 04 06:33:28 crc kubenswrapper[4832]: I1204 06:33:28.499237 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f4cc7d6-382c-46c7-a422-d728d7d8aa19" containerID="d1c2a6a52caf52a6d65a882a396f88d719f4d44c292dbd0dbc6636b6bc9aa77f" exitCode=0 Dec 04 06:33:28 crc kubenswrapper[4832]: I1204 06:33:28.499354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" 
event={"ID":"2f4cc7d6-382c-46c7-a422-d728d7d8aa19","Type":"ContainerDied","Data":"d1c2a6a52caf52a6d65a882a396f88d719f4d44c292dbd0dbc6636b6bc9aa77f"} Dec 04 06:33:29 crc kubenswrapper[4832]: I1204 06:33:29.990438 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.033087 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-repo-setup-combined-ca-bundle\") pod \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.033180 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-ssh-key\") pod \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.033232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-inventory\") pod \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.033264 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gthl\" (UniqueName: \"kubernetes.io/projected/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-kube-api-access-6gthl\") pod \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\" (UID: \"2f4cc7d6-382c-46c7-a422-d728d7d8aa19\") " Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.057121 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-kube-api-access-6gthl" (OuterVolumeSpecName: "kube-api-access-6gthl") pod "2f4cc7d6-382c-46c7-a422-d728d7d8aa19" (UID: "2f4cc7d6-382c-46c7-a422-d728d7d8aa19"). InnerVolumeSpecName "kube-api-access-6gthl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.058031 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2f4cc7d6-382c-46c7-a422-d728d7d8aa19" (UID: "2f4cc7d6-382c-46c7-a422-d728d7d8aa19"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.077323 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f4cc7d6-382c-46c7-a422-d728d7d8aa19" (UID: "2f4cc7d6-382c-46c7-a422-d728d7d8aa19"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.112487 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-inventory" (OuterVolumeSpecName: "inventory") pod "2f4cc7d6-382c-46c7-a422-d728d7d8aa19" (UID: "2f4cc7d6-382c-46c7-a422-d728d7d8aa19"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.135901 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.136001 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gthl\" (UniqueName: \"kubernetes.io/projected/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-kube-api-access-6gthl\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.136019 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.136031 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f4cc7d6-382c-46c7-a422-d728d7d8aa19-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.525417 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.525290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh" event={"ID":"2f4cc7d6-382c-46c7-a422-d728d7d8aa19","Type":"ContainerDied","Data":"b5562aef0a70856d6851e4883e257dd089b8625f57a3841a6b9f8a78544bead2"} Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.544253 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5562aef0a70856d6851e4883e257dd089b8625f57a3841a6b9f8a78544bead2" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.724036 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn"] Dec 04 06:33:30 crc kubenswrapper[4832]: E1204 06:33:30.724470 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4cc7d6-382c-46c7-a422-d728d7d8aa19" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.724491 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4cc7d6-382c-46c7-a422-d728d7d8aa19" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.724758 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4cc7d6-382c-46c7-a422-d728d7d8aa19" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.725556 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.728030 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.728362 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.728431 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.730436 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.746271 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn"] Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.851792 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.852200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.852537 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4nv6\" (UniqueName: \"kubernetes.io/projected/98f3d48c-b338-4b27-893d-f83a3e55ccd8-kube-api-access-g4nv6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.954184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4nv6\" (UniqueName: \"kubernetes.io/projected/98f3d48c-b338-4b27-893d-f83a3e55ccd8-kube-api-access-g4nv6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.954704 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.954738 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.963118 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.963959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:30 crc kubenswrapper[4832]: I1204 06:33:30.974146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4nv6\" (UniqueName: \"kubernetes.io/projected/98f3d48c-b338-4b27-893d-f83a3e55ccd8-kube-api-access-g4nv6\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dzcnn\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:31 crc kubenswrapper[4832]: I1204 06:33:31.055561 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:32 crc kubenswrapper[4832]: I1204 06:33:31.651620 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn"] Dec 04 06:33:32 crc kubenswrapper[4832]: W1204 06:33:31.657609 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f3d48c_b338_4b27_893d_f83a3e55ccd8.slice/crio-f2eef0ef8c4efd717e19a2a8c2b21fe1267c1fb332517990bf785bf142c9233e WatchSource:0}: Error finding container f2eef0ef8c4efd717e19a2a8c2b21fe1267c1fb332517990bf785bf142c9233e: Status 404 returned error can't find the container with id f2eef0ef8c4efd717e19a2a8c2b21fe1267c1fb332517990bf785bf142c9233e Dec 04 06:33:32 crc kubenswrapper[4832]: I1204 06:33:32.552530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" event={"ID":"98f3d48c-b338-4b27-893d-f83a3e55ccd8","Type":"ContainerStarted","Data":"1ae508d68167365d27fe1560bfd8548de5f6077d31bec3cffbc58ce8c7401f4e"} Dec 04 06:33:32 crc kubenswrapper[4832]: I1204 06:33:32.552964 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" event={"ID":"98f3d48c-b338-4b27-893d-f83a3e55ccd8","Type":"ContainerStarted","Data":"f2eef0ef8c4efd717e19a2a8c2b21fe1267c1fb332517990bf785bf142c9233e"} Dec 04 06:33:32 crc kubenswrapper[4832]: I1204 06:33:32.575755 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" podStartSLOduration=2.2670550990000002 podStartE2EDuration="2.575739369s" podCreationTimestamp="2025-12-04 06:33:30 +0000 UTC" firstStartedPulling="2025-12-04 06:33:31.666560272 +0000 UTC m=+1467.279377978" lastFinishedPulling="2025-12-04 06:33:31.975244542 +0000 UTC m=+1467.588062248" observedRunningTime="2025-12-04 06:33:32.573080784 +0000 UTC m=+1468.185898490" watchObservedRunningTime="2025-12-04 06:33:32.575739369 +0000 UTC 
m=+1468.188557075" Dec 04 06:33:35 crc kubenswrapper[4832]: I1204 06:33:35.362525 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:33:35 crc kubenswrapper[4832]: I1204 06:33:35.362991 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:33:35 crc kubenswrapper[4832]: I1204 06:33:35.595583 4832 generic.go:334] "Generic (PLEG): container finished" podID="98f3d48c-b338-4b27-893d-f83a3e55ccd8" containerID="1ae508d68167365d27fe1560bfd8548de5f6077d31bec3cffbc58ce8c7401f4e" exitCode=0 Dec 04 06:33:35 crc kubenswrapper[4832]: I1204 06:33:35.595650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" event={"ID":"98f3d48c-b338-4b27-893d-f83a3e55ccd8","Type":"ContainerDied","Data":"1ae508d68167365d27fe1560bfd8548de5f6077d31bec3cffbc58ce8c7401f4e"} Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.069769 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.115027 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4nv6\" (UniqueName: \"kubernetes.io/projected/98f3d48c-b338-4b27-893d-f83a3e55ccd8-kube-api-access-g4nv6\") pod \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.115091 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-inventory\") pod \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.115122 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-ssh-key\") pod \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\" (UID: \"98f3d48c-b338-4b27-893d-f83a3e55ccd8\") " Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.122577 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f3d48c-b338-4b27-893d-f83a3e55ccd8-kube-api-access-g4nv6" (OuterVolumeSpecName: "kube-api-access-g4nv6") pod "98f3d48c-b338-4b27-893d-f83a3e55ccd8" (UID: "98f3d48c-b338-4b27-893d-f83a3e55ccd8"). InnerVolumeSpecName "kube-api-access-g4nv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.143543 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98f3d48c-b338-4b27-893d-f83a3e55ccd8" (UID: "98f3d48c-b338-4b27-893d-f83a3e55ccd8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.144765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-inventory" (OuterVolumeSpecName: "inventory") pod "98f3d48c-b338-4b27-893d-f83a3e55ccd8" (UID: "98f3d48c-b338-4b27-893d-f83a3e55ccd8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.216732 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4nv6\" (UniqueName: \"kubernetes.io/projected/98f3d48c-b338-4b27-893d-f83a3e55ccd8-kube-api-access-g4nv6\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.217076 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.217087 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98f3d48c-b338-4b27-893d-f83a3e55ccd8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.629013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" event={"ID":"98f3d48c-b338-4b27-893d-f83a3e55ccd8","Type":"ContainerDied","Data":"f2eef0ef8c4efd717e19a2a8c2b21fe1267c1fb332517990bf785bf142c9233e"} Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.629053 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2eef0ef8c4efd717e19a2a8c2b21fe1267c1fb332517990bf785bf142c9233e" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.629068 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dzcnn" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.700875 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m"] Dec 04 06:33:37 crc kubenswrapper[4832]: E1204 06:33:37.701324 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f3d48c-b338-4b27-893d-f83a3e55ccd8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.701363 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f3d48c-b338-4b27-893d-f83a3e55ccd8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.701749 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f3d48c-b338-4b27-893d-f83a3e55ccd8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.702644 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.714242 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m"] Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.717985 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.718315 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.718653 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.719378 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.829322 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.829416 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbgz\" (UniqueName: \"kubernetes.io/projected/45d69295-db9b-4a70-a031-2e19abcf6be1-kube-api-access-hzbgz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.829474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.829503 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.931349 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.931404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzbgz\" (UniqueName: \"kubernetes.io/projected/45d69295-db9b-4a70-a031-2e19abcf6be1-kube-api-access-hzbgz\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.931463 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.931490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.936370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.939221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.939732 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:37 crc kubenswrapper[4832]: I1204 06:33:37.947766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzbgz\" (UniqueName: \"kubernetes.io/projected/45d69295-db9b-4a70-a031-2e19abcf6be1-kube-api-access-hzbgz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:38 crc kubenswrapper[4832]: I1204 06:33:38.020932 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:33:38 crc kubenswrapper[4832]: I1204 06:33:38.573859 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m"] Dec 04 06:33:38 crc kubenswrapper[4832]: I1204 06:33:38.640865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" event={"ID":"45d69295-db9b-4a70-a031-2e19abcf6be1","Type":"ContainerStarted","Data":"30714f2ad7d0a69eaf2c5bede7fff7d17b635488a3a28df27d818b4fc10acf4f"} Dec 04 06:33:39 crc kubenswrapper[4832]: I1204 06:33:39.653807 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" event={"ID":"45d69295-db9b-4a70-a031-2e19abcf6be1","Type":"ContainerStarted","Data":"af8f5e2451eaf146fe90df996a9e815c9b54245258dda530111c3de229645c04"} Dec 04 06:33:39 crc kubenswrapper[4832]: I1204 06:33:39.679655 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" podStartSLOduration=2.495573246 podStartE2EDuration="2.679635269s" podCreationTimestamp="2025-12-04 06:33:37 +0000 UTC" firstStartedPulling="2025-12-04 06:33:38.566120241 +0000 UTC m=+1474.178937947" lastFinishedPulling="2025-12-04 06:33:38.750182264 +0000 UTC m=+1474.362999970" observedRunningTime="2025-12-04 06:33:39.67601648 +0000 UTC m=+1475.288834186" watchObservedRunningTime="2025-12-04 06:33:39.679635269 +0000 UTC m=+1475.292452975" Dec 04 06:34:05 crc kubenswrapper[4832]: I1204 06:34:05.363225 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:34:05 crc kubenswrapper[4832]: I1204 06:34:05.364219 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:34:05 crc kubenswrapper[4832]: I1204 06:34:05.364328 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:34:05 crc kubenswrapper[4832]: I1204 06:34:05.365597 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:34:05 crc kubenswrapper[4832]: I1204 06:34:05.365677 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" gracePeriod=600 Dec 04 06:34:05 crc kubenswrapper[4832]: E1204 06:34:05.497875 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:34:06 crc kubenswrapper[4832]: I1204 06:34:06.071480 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" exitCode=0 Dec 04 06:34:06 crc kubenswrapper[4832]: I1204 06:34:06.071593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5"} Dec 04 06:34:06 crc kubenswrapper[4832]: I1204 06:34:06.071979 4832 scope.go:117] "RemoveContainer" containerID="348e974629646d54fa0c54ee820cfb4880e34f9731c9b205d4b99c38588e1db7" Dec 04 06:34:06 crc kubenswrapper[4832]: I1204 06:34:06.074248 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:34:06 crc kubenswrapper[4832]: E1204 06:34:06.075562 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:34:18 crc kubenswrapper[4832]: I1204 06:34:18.485066 4832 scope.go:117] "RemoveContainer" containerID="a4fa05ed35c11a542a8d0eb9a657526f567af5481c9c946931f663f314dd361d" Dec 04 06:34:18 crc kubenswrapper[4832]: I1204 06:34:18.505733 4832 scope.go:117] "RemoveContainer" containerID="95432ea2ad169a209bd4a9070d45aebb8e572aaf8e25d63c2894ade7aa9bc66f" Dec 04 06:34:18 crc kubenswrapper[4832]: I1204 06:34:18.712680 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:34:18 crc kubenswrapper[4832]: E1204 06:34:18.713216 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:34:30 crc kubenswrapper[4832]: I1204 06:34:30.711817 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:34:30 crc kubenswrapper[4832]: E1204 06:34:30.713264 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:34:45 crc kubenswrapper[4832]: I1204 06:34:45.710738 4832 
scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:34:45 crc kubenswrapper[4832]: E1204 06:34:45.711870 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:35:00 crc kubenswrapper[4832]: I1204 06:35:00.711517 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:35:00 crc kubenswrapper[4832]: E1204 06:35:00.713099 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:35:11 crc kubenswrapper[4832]: I1204 06:35:11.722356 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:35:11 crc kubenswrapper[4832]: E1204 06:35:11.725683 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:35:18 crc kubenswrapper[4832]: I1204 06:35:18.576663 4832 scope.go:117] "RemoveContainer" containerID="e28c2795fe29c25944c1bc1087a77388c1ad339c75e632aadfecaae4794156fe" Dec 04 06:35:18 crc kubenswrapper[4832]: I1204 06:35:18.605052 4832 scope.go:117] "RemoveContainer" containerID="f49089505384cdc408301e968915172e66d01a9f25abe81a105be2d4c27b604e" Dec 04 06:35:18 crc kubenswrapper[4832]: I1204 06:35:18.654026 4832 scope.go:117] "RemoveContainer" containerID="ce8bff225dacd20b8d02aed97dc869d6b47eea08e3c8f940389f1372a9a8fec0" Dec 04 06:35:18 crc kubenswrapper[4832]: I1204 06:35:18.705656 4832 scope.go:117] "RemoveContainer" containerID="2440bee35940df428a9a1a4587d644009856959b3f8cfa8fb06ad99ad10b7e33" Dec 04 06:35:18 crc kubenswrapper[4832]: I1204 06:35:18.771286 4832 scope.go:117] "RemoveContainer" containerID="a200f72b817a2525eff1d166330bf820654a903e28cce83bc5e70d422431c528" Dec 04 06:35:18 crc kubenswrapper[4832]: I1204 06:35:18.797276 4832 scope.go:117] "RemoveContainer" containerID="07761db0fd6cd535af8520f86f18797c62fc70757ffe4205a343fa48de717863" Dec 04 06:35:22 crc kubenswrapper[4832]: I1204 06:35:22.711324 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:35:22 crc kubenswrapper[4832]: E1204 06:35:22.712192 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.681262 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dmtn"]
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.687066 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.700218 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dmtn"]
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.770560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-catalog-content\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.770834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlxx\" (UniqueName: \"kubernetes.io/projected/818dabb6-0cf6-44df-91ca-6cad2d3437c6-kube-api-access-mnlxx\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.771016 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-utilities\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.874254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-catalog-content\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.874795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlxx\" (UniqueName: \"kubernetes.io/projected/818dabb6-0cf6-44df-91ca-6cad2d3437c6-kube-api-access-mnlxx\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.874988 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-utilities\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.875027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-catalog-content\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.875303 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-utilities\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:30 crc kubenswrapper[4832]: I1204 06:35:30.900296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlxx\" (UniqueName: \"kubernetes.io/projected/818dabb6-0cf6-44df-91ca-6cad2d3437c6-kube-api-access-mnlxx\") pod \"community-operators-7dmtn\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:31 crc kubenswrapper[4832]: I1204 06:35:31.018510 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dmtn"
Dec 04 06:35:31 crc kubenswrapper[4832]: I1204 06:35:31.645754 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dmtn"]
Dec 04 06:35:31 crc kubenswrapper[4832]: I1204 06:35:31.976551 4832 generic.go:334] "Generic (PLEG): container finished" podID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerID="5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625" exitCode=0
Dec 04 06:35:31 crc kubenswrapper[4832]: I1204 06:35:31.976665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerDied","Data":"5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625"}
Dec 04 06:35:31 crc kubenswrapper[4832]: I1204 06:35:31.976871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerStarted","Data":"d6741a803c80618c4efa1d35d0a18750fb9449c28d4b78aa73955f29cd5ba2dd"}
Dec 04 06:35:32 crc kubenswrapper[4832]: I1204 06:35:32.988705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerStarted","Data":"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890"}
Dec 04 06:35:33 crc kubenswrapper[4832]: I1204 06:35:33.710930 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5"
Dec 04 06:35:33 crc kubenswrapper[4832]: E1204 06:35:33.711333 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:35:33 crc kubenswrapper[4832]: I1204 06:35:33.998987 4832 generic.go:334] "Generic (PLEG): container finished" podID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerID="d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890" exitCode=0
Dec 04 06:35:33 crc kubenswrapper[4832]: I1204 06:35:33.999055 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerDied","Data":"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890"}
event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerDied","Data":"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890"} Dec 04 06:35:35 crc kubenswrapper[4832]: I1204 06:35:35.010437 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerStarted","Data":"8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782"} Dec 04 06:35:35 crc kubenswrapper[4832]: I1204 06:35:35.027915 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dmtn" podStartSLOduration=2.579051954 podStartE2EDuration="5.027895444s" podCreationTimestamp="2025-12-04 06:35:30 +0000 UTC" firstStartedPulling="2025-12-04 06:35:31.979941882 +0000 UTC m=+1587.592759588" lastFinishedPulling="2025-12-04 06:35:34.428785372 +0000 UTC m=+1590.041603078" observedRunningTime="2025-12-04 06:35:35.026210823 +0000 UTC m=+1590.639028599" watchObservedRunningTime="2025-12-04 06:35:35.027895444 +0000 UTC m=+1590.640713140" Dec 04 06:35:41 crc kubenswrapper[4832]: I1204 06:35:41.018727 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dmtn" Dec 04 06:35:41 crc kubenswrapper[4832]: I1204 06:35:41.019239 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dmtn" Dec 04 06:35:41 crc kubenswrapper[4832]: I1204 06:35:41.067213 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dmtn" Dec 04 06:35:41 crc kubenswrapper[4832]: I1204 06:35:41.127301 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dmtn" Dec 04 06:35:41 crc kubenswrapper[4832]: I1204 06:35:41.301670 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dmtn"] Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.091240 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dmtn" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="registry-server" containerID="cri-o://8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782" gracePeriod=2 Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.587067 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dmtn" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.719182 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjp6z"] Dec 04 06:35:43 crc kubenswrapper[4832]: E1204 06:35:43.719927 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="extract-utilities" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.719947 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="extract-utilities" Dec 04 06:35:43 crc kubenswrapper[4832]: E1204 06:35:43.719967 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="extract-content" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.719974 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="extract-content" Dec 04 06:35:43 crc kubenswrapper[4832]: E1204 06:35:43.720008 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="registry-server" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.720016 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="registry-server" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.720200 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerName="registry-server" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.721667 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.730579 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjp6z"] Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.741516 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-utilities\") pod \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.741811 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlxx\" (UniqueName: \"kubernetes.io/projected/818dabb6-0cf6-44df-91ca-6cad2d3437c6-kube-api-access-mnlxx\") pod \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.741923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-catalog-content\") pod \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\" (UID: \"818dabb6-0cf6-44df-91ca-6cad2d3437c6\") " Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.759737 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-utilities" (OuterVolumeSpecName: "utilities") pod "818dabb6-0cf6-44df-91ca-6cad2d3437c6" (UID: "818dabb6-0cf6-44df-91ca-6cad2d3437c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.770611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818dabb6-0cf6-44df-91ca-6cad2d3437c6-kube-api-access-mnlxx" (OuterVolumeSpecName: "kube-api-access-mnlxx") pod "818dabb6-0cf6-44df-91ca-6cad2d3437c6" (UID: "818dabb6-0cf6-44df-91ca-6cad2d3437c6"). InnerVolumeSpecName "kube-api-access-mnlxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.810024 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "818dabb6-0cf6-44df-91ca-6cad2d3437c6" (UID: "818dabb6-0cf6-44df-91ca-6cad2d3437c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.843736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-utilities\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.843821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ld6\" (UniqueName: \"kubernetes.io/projected/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-kube-api-access-79ld6\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.843948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-catalog-content\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.844124 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.844145 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlxx\" (UniqueName: \"kubernetes.io/projected/818dabb6-0cf6-44df-91ca-6cad2d3437c6-kube-api-access-mnlxx\") on node \"crc\" DevicePath \"\"" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.844160 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818dabb6-0cf6-44df-91ca-6cad2d3437c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.945689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-utilities\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.945758 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-79ld6\" (UniqueName: \"kubernetes.io/projected/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-kube-api-access-79ld6\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.945841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-catalog-content\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.946277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-utilities\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.946427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-catalog-content\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:43 crc kubenswrapper[4832]: I1204 06:35:43.966620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79ld6\" (UniqueName: \"kubernetes.io/projected/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-kube-api-access-79ld6\") pod \"certified-operators-kjp6z\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") " pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.108860 4832 generic.go:334] "Generic (PLEG): container finished" podID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" containerID="8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782" exitCode=0 Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.108930 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dmtn" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.108932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerDied","Data":"8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782"} Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.108998 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dmtn" event={"ID":"818dabb6-0cf6-44df-91ca-6cad2d3437c6","Type":"ContainerDied","Data":"d6741a803c80618c4efa1d35d0a18750fb9449c28d4b78aa73955f29cd5ba2dd"} Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.109027 4832 scope.go:117] "RemoveContainer" containerID="8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.128719 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.137984 4832 scope.go:117] "RemoveContainer" containerID="d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.155282 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dmtn"] Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.172836 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7dmtn"] Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.186717 4832 scope.go:117] "RemoveContainer" containerID="5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.217151 4832 scope.go:117] "RemoveContainer" containerID="8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782" Dec 04 06:35:44 crc kubenswrapper[4832]: E1204 06:35:44.217795 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782\": container with ID starting with 8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782 not found: ID does not exist" containerID="8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.217839 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782"} err="failed to get container status \"8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782\": rpc error: code = NotFound desc = could not find container \"8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782\": container with ID starting with 8d1d1c5a1cf05d9f6dc10fd2467f2f694f8b9ced914f92d07c372e77b4db0782 not found: ID does not exist" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.217867 4832 scope.go:117] "RemoveContainer" containerID="d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890" Dec 04 06:35:44 crc kubenswrapper[4832]: E1204 06:35:44.218358 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890\": container with ID starting with d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890 not found: ID does not exist" containerID="d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.218416 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890"} err="failed to get container status \"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890\": rpc error: code = NotFound desc = could not find container \"d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890\": container with ID starting with d430464e9fece4de3c2e72a9899ed537c191c6cfc64bc24e688f6f7f23416890 not found: ID does not exist" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.218446 4832 scope.go:117] "RemoveContainer" containerID="5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625" Dec 04 06:35:44 crc kubenswrapper[4832]: E1204 06:35:44.218811 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625\": container with ID starting with 5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625 not found: ID does not exist" containerID="5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.218863 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625"} err="failed to get container status \"5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625\": rpc error: code = NotFound desc = could not find container \"5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625\": container with ID starting with 5c770eefc52b1a3d1eec9cd0bbf82ccde73b6980ac340ae65654c02f46f4c625 not found: ID does not exist" Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.699925 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjp6z"] Dec 04 06:35:44 crc kubenswrapper[4832]: I1204 06:35:44.725980 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818dabb6-0cf6-44df-91ca-6cad2d3437c6" path="/var/lib/kubelet/pods/818dabb6-0cf6-44df-91ca-6cad2d3437c6/volumes" Dec 04 06:35:45 crc kubenswrapper[4832]: I1204 06:35:45.121478 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerID="2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7" exitCode=0 Dec 04 06:35:45 crc kubenswrapper[4832]: I1204 06:35:45.121537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerDied","Data":"2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7"} Dec 04 06:35:45 crc kubenswrapper[4832]: I1204 06:35:45.121862 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerStarted","Data":"f1016bafa85078da311d1bea20adc6de8e3c2b338f29d96fde60315226a412b0"} Dec 04 06:35:45 crc kubenswrapper[4832]: I1204 06:35:45.711699 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:35:45 crc kubenswrapper[4832]: E1204 06:35:45.712514 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:35:46 crc kubenswrapper[4832]: I1204 06:35:46.136312 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerStarted","Data":"c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4"} Dec 04 06:35:47 crc kubenswrapper[4832]: I1204 06:35:47.148846 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerID="c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4" exitCode=0 Dec 04 06:35:47 crc 
Dec 04 06:35:47 crc kubenswrapper[4832]: I1204 06:35:47.148954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerDied","Data":"c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4"}
Dec 04 06:35:48 crc kubenswrapper[4832]: I1204 06:35:48.164800 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerStarted","Data":"15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0"}
Dec 04 06:35:48 crc kubenswrapper[4832]: I1204 06:35:48.189195 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjp6z" podStartSLOduration=2.506962008 podStartE2EDuration="5.189175215s" podCreationTimestamp="2025-12-04 06:35:43 +0000 UTC" firstStartedPulling="2025-12-04 06:35:45.124172753 +0000 UTC m=+1600.736990469" lastFinishedPulling="2025-12-04 06:35:47.80638593 +0000 UTC m=+1603.419203676" observedRunningTime="2025-12-04 06:35:48.183586628 +0000 UTC m=+1603.796404404" watchObservedRunningTime="2025-12-04 06:35:48.189175215 +0000 UTC m=+1603.801992921"
Dec 04 06:35:54 crc kubenswrapper[4832]: I1204 06:35:54.128886 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjp6z"
Dec 04 06:35:54 crc kubenswrapper[4832]: I1204 06:35:54.130037 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjp6z"
Dec 04 06:35:54 crc kubenswrapper[4832]: I1204 06:35:54.206794 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjp6z"
Dec 04 06:35:54 crc kubenswrapper[4832]: I1204 06:35:54.309105 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjp6z"
Dec 04 06:35:54 crc kubenswrapper[4832]: I1204 06:35:54.457729 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjp6z"]
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.277295 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjp6z" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="registry-server" containerID="cri-o://15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0" gracePeriod=2
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.765202 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjp6z"
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.849602 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79ld6\" (UniqueName: \"kubernetes.io/projected/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-kube-api-access-79ld6\") pod \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") "
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.849805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-catalog-content\") pod \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") "
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.850108 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-utilities\") pod \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\" (UID: \"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f\") "
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.851284 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-utilities" (OuterVolumeSpecName: "utilities") pod "2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" (UID: "2d8106f9-dfdf-4dd1-a2d6-fd14d842524f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.853254 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.857726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-kube-api-access-79ld6" (OuterVolumeSpecName: "kube-api-access-79ld6") pod "2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" (UID: "2d8106f9-dfdf-4dd1-a2d6-fd14d842524f"). InnerVolumeSpecName "kube-api-access-79ld6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.955873 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79ld6\" (UniqueName: \"kubernetes.io/projected/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-kube-api-access-79ld6\") on node \"crc\" DevicePath \"\""
Dec 04 06:35:56 crc kubenswrapper[4832]: I1204 06:35:56.972570 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" (UID: "2d8106f9-dfdf-4dd1-a2d6-fd14d842524f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.057957 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.288199 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerID="15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0" exitCode=0 Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.288267 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjp6z" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.288257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerDied","Data":"15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0"} Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.288785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjp6z" event={"ID":"2d8106f9-dfdf-4dd1-a2d6-fd14d842524f","Type":"ContainerDied","Data":"f1016bafa85078da311d1bea20adc6de8e3c2b338f29d96fde60315226a412b0"} Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.288809 4832 scope.go:117] "RemoveContainer" containerID="15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.328236 4832 scope.go:117] "RemoveContainer" containerID="c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.337053 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjp6z"] Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.351042 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjp6z"] Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.367862 4832 scope.go:117] "RemoveContainer" containerID="2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.405152 4832 scope.go:117] "RemoveContainer" containerID="15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0" Dec 04 06:35:57 crc kubenswrapper[4832]: E1204 06:35:57.407903 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0\": container with ID starting with 15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0 not found: ID does not exist" containerID="15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.408413 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0"} err="failed to get container status \"15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0\": rpc error: code = NotFound desc = could not find container \"15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0\": container with ID starting with 15db8a166f28ed63269024fdb624b72981dbbb2729a30b6b0936cf595b5116e0 not found: ID does not exist" Dec 04 
06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.408599 4832 scope.go:117] "RemoveContainer" containerID="c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4" Dec 04 06:35:57 crc kubenswrapper[4832]: E1204 06:35:57.416674 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4\": container with ID starting with c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4 not found: ID does not exist" containerID="c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.417094 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4"} err="failed to get container status \"c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4\": rpc error: code = NotFound desc = could not find container \"c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4\": container with ID starting with c400c613e9315fd22fa56ce9104f97f85db45e6037c0cd6fbfc26f3e853733a4 not found: ID does not exist" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.417222 4832 scope.go:117] "RemoveContainer" containerID="2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7" Dec 04 06:35:57 crc kubenswrapper[4832]: E1204 06:35:57.417876 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7\": container with ID starting with 2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7 not found: ID does not exist" containerID="2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7" Dec 04 06:35:57 crc kubenswrapper[4832]: I1204 06:35:57.417988 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7"} err="failed to get container status \"2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7\": rpc error: code = NotFound desc = could not find container \"2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7\": container with ID starting with 2af03bf01730735cd3e6e202731c369f3f34af12d499ebd8913e66f218c411b7 not found: ID does not exist" Dec 04 06:35:58 crc kubenswrapper[4832]: I1204 06:35:58.710759 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:35:58 crc kubenswrapper[4832]: E1204 06:35:58.711141 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:35:58 crc kubenswrapper[4832]: I1204 06:35:58.722619 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" path="/var/lib/kubelet/pods/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f/volumes" Dec 04 06:36:12 crc kubenswrapper[4832]: I1204 06:36:12.711136 4832 scope.go:117] "RemoveContainer" 
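"Cleaned up orphaned pod volumes dir" is the kubelet's periodic sweep of the per-pod volume directories for pods that no longer exist; it can only succeed once every volume has been torn down, which is why it trails the UnmountVolume lines above. The directory it sweeps follows directly from the podUID (a sketch of the layout, matching the path= values logged here):

```python
from pathlib import Path

# The per-pod volumes dir the kubelet sweeps once the pod is gone
# and all of its volumes are unmounted.
def pod_volumes_dir(pod_uid: str) -> Path:
    return Path("/var/lib/kubelet/pods") / pod_uid / "volumes"

print(pod_volumes_dir("2d8106f9-dfdf-4dd1-a2d6-fd14d842524f"))
# /var/lib/kubelet/pods/2d8106f9-dfdf-4dd1-a2d6-fd14d842524f/volumes
```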
containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:36:12 crc kubenswrapper[4832]: E1204 06:36:12.712182 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:36:18 crc kubenswrapper[4832]: I1204 06:36:18.909654 4832 scope.go:117] "RemoveContainer" containerID="df40bca24b59ffe8c6fdd28e1d20e7b38dd1e490517c7cafd53cd998aec483bd" Dec 04 06:36:18 crc kubenswrapper[4832]: I1204 06:36:18.931560 4832 scope.go:117] "RemoveContainer" containerID="e46112f4d6393a4e8349cc06399fa2a6ad2cfc746361f128236fb14a24abfa32" Dec 04 06:36:18 crc kubenswrapper[4832]: I1204 06:36:18.952342 4832 scope.go:117] "RemoveContainer" containerID="c03e7beaaf0ef2673f9c8285e093777c6a352bdd788f4b95d5a1eeba5f902f39" Dec 04 06:36:18 crc kubenswrapper[4832]: I1204 06:36:18.976022 4832 scope.go:117] "RemoveContainer" containerID="bb5abae526c4825621682ec2d4d83e537c1384cf66eaf0acd55ad299cd6afcd8" Dec 04 06:36:18 crc kubenswrapper[4832]: I1204 06:36:18.997878 4832 scope.go:117] "RemoveContainer" containerID="7b157a75e760ff4fef7d01c3e7c9189a52f63a723d17de390f90d264e0babe2a" Dec 04 06:36:26 crc kubenswrapper[4832]: I1204 06:36:26.711051 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:36:26 crc kubenswrapper[4832]: E1204 06:36:26.711798 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:36:37 crc kubenswrapper[4832]: I1204 06:36:37.711573 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:36:37 crc kubenswrapper[4832]: E1204 06:36:37.712710 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:36:43 crc kubenswrapper[4832]: I1204 06:36:43.809883 4832 generic.go:334] "Generic (PLEG): container finished" podID="45d69295-db9b-4a70-a031-2e19abcf6be1" containerID="af8f5e2451eaf146fe90df996a9e815c9b54245258dda530111c3de229645c04" exitCode=0 Dec 04 06:36:43 crc kubenswrapper[4832]: I1204 06:36:43.810009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" event={"ID":"45d69295-db9b-4a70-a031-2e19abcf6be1","Type":"ContainerDied","Data":"af8f5e2451eaf146fe90df996a9e815c9b54245258dda530111c3de229645c04"} Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.230000 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.234627 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-ssh-key\") pod \"45d69295-db9b-4a70-a031-2e19abcf6be1\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.234832 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-inventory\") pod \"45d69295-db9b-4a70-a031-2e19abcf6be1\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.234878 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzbgz\" (UniqueName: \"kubernetes.io/projected/45d69295-db9b-4a70-a031-2e19abcf6be1-kube-api-access-hzbgz\") pod \"45d69295-db9b-4a70-a031-2e19abcf6be1\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.234955 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-bootstrap-combined-ca-bundle\") pod \"45d69295-db9b-4a70-a031-2e19abcf6be1\" (UID: \"45d69295-db9b-4a70-a031-2e19abcf6be1\") " Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.243557 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "45d69295-db9b-4a70-a031-2e19abcf6be1" (UID: "45d69295-db9b-4a70-a031-2e19abcf6be1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.243774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d69295-db9b-4a70-a031-2e19abcf6be1-kube-api-access-hzbgz" (OuterVolumeSpecName: "kube-api-access-hzbgz") pod "45d69295-db9b-4a70-a031-2e19abcf6be1" (UID: "45d69295-db9b-4a70-a031-2e19abcf6be1"). InnerVolumeSpecName "kube-api-access-hzbgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.280600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-inventory" (OuterVolumeSpecName: "inventory") pod "45d69295-db9b-4a70-a031-2e19abcf6be1" (UID: "45d69295-db9b-4a70-a031-2e19abcf6be1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.281034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "45d69295-db9b-4a70-a031-2e19abcf6be1" (UID: "45d69295-db9b-4a70-a031-2e19abcf6be1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.337597 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzbgz\" (UniqueName: \"kubernetes.io/projected/45d69295-db9b-4a70-a031-2e19abcf6be1-kube-api-access-hzbgz\") on node \"crc\" DevicePath \"\"" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.337981 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.337994 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.338006 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d69295-db9b-4a70-a031-2e19abcf6be1-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.828190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" event={"ID":"45d69295-db9b-4a70-a031-2e19abcf6be1","Type":"ContainerDied","Data":"30714f2ad7d0a69eaf2c5bede7fff7d17b635488a3a28df27d818b4fc10acf4f"} Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.828233 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30714f2ad7d0a69eaf2c5bede7fff7d17b635488a3a28df27d818b4fc10acf4f" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.828304 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917126 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5"] Dec 04 06:36:45 crc kubenswrapper[4832]: E1204 06:36:45.917599 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="registry-server" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917620 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="registry-server" Dec 04 06:36:45 crc kubenswrapper[4832]: E1204 06:36:45.917632 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="extract-content" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917640 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="extract-content" Dec 04 06:36:45 crc kubenswrapper[4832]: E1204 06:36:45.917659 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d69295-db9b-4a70-a031-2e19abcf6be1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917666 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d69295-db9b-4a70-a031-2e19abcf6be1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 04 06:36:45 crc kubenswrapper[4832]: E1204 06:36:45.917677 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="extract-utilities" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917683 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="extract-utilities" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917865 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8106f9-dfdf-4dd1-a2d6-fd14d842524f" containerName="registry-server" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.917899 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d69295-db9b-4a70-a031-2e19abcf6be1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.918606 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.920431 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.920648 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.920711 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.920815 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.933234 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5"] Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.950544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.950591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:45 crc kubenswrapper[4832]: I1204 06:36:45.950835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6644s\" (UniqueName: \"kubernetes.io/projected/a88a60b4-19c2-4ef9-b586-2b6733219e7a-kube-api-access-6644s\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.052862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6644s\" (UniqueName: \"kubernetes.io/projected/a88a60b4-19c2-4ef9-b586-2b6733219e7a-kube-api-access-6644s\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.052932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.052960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.057272 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.057573 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.084457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6644s\" (UniqueName: \"kubernetes.io/projected/a88a60b4-19c2-4ef9-b586-2b6733219e7a-kube-api-access-6644s\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t96k5\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.248010 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.753600 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5"] Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.756751 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:36:46 crc kubenswrapper[4832]: I1204 06:36:46.838251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" event={"ID":"a88a60b4-19c2-4ef9-b586-2b6733219e7a","Type":"ContainerStarted","Data":"5f037e2dc65699bf3191843838829049a658f1d8b26450673ba7c853525ab1a9"} Dec 04 06:36:47 crc kubenswrapper[4832]: I1204 06:36:47.848512 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" event={"ID":"a88a60b4-19c2-4ef9-b586-2b6733219e7a","Type":"ContainerStarted","Data":"acd4f36c884d5ada22451c19317bdf69379b88c305f9edda865975c4523bb91d"} Dec 04 06:36:47 crc kubenswrapper[4832]: I1204 06:36:47.873903 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" podStartSLOduration=2.680870845 podStartE2EDuration="2.873886117s" podCreationTimestamp="2025-12-04 06:36:45 +0000 UTC" firstStartedPulling="2025-12-04 06:36:46.756304276 +0000 UTC m=+1662.369121982" lastFinishedPulling="2025-12-04 06:36:46.949319548 +0000 UTC m=+1662.562137254" observedRunningTime="2025-12-04 06:36:47.86432098 +0000 UTC m=+1663.477138686" watchObservedRunningTime="2025-12-04 06:36:47.873886117 +0000 UTC m=+1663.486703823" Dec 04 06:36:48 crc kubenswrapper[4832]: I1204 06:36:48.710738 4832 scope.go:117] "RemoveContainer" 
containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:36:48 crc kubenswrapper[4832]: E1204 06:36:48.711018 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:37:03 crc kubenswrapper[4832]: I1204 06:37:03.711866 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:37:03 crc kubenswrapper[4832]: E1204 06:37:03.713036 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:37:18 crc kubenswrapper[4832]: I1204 06:37:18.710641 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:37:18 crc kubenswrapper[4832]: E1204 06:37:18.711465 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:37:19 crc kubenswrapper[4832]: I1204 06:37:19.077557 4832 scope.go:117] "RemoveContainer" containerID="99a472770723ce78feca4fca890f726fadbbef1a8a4cdf9763cb31dd7c775372" Dec 04 06:37:19 crc kubenswrapper[4832]: I1204 06:37:19.137279 4832 scope.go:117] "RemoveContainer" containerID="cb79c867a37393e6ba45915b2d336c72900bb45e94b0438311b411ba42c1eb69" Dec 04 06:37:19 crc kubenswrapper[4832]: I1204 06:37:19.164104 4832 scope.go:117] "RemoveContainer" containerID="f88ab60c0d3357807f4ca3054dc6bd5c1e2a542cb22a29df519a0b2e1c052acb" Dec 04 06:37:19 crc kubenswrapper[4832]: I1204 06:37:19.209248 4832 scope.go:117] "RemoveContainer" containerID="703d2eb9413661709caa8c6ef3aac10e5c19c14cfb18e345a582b4db1022c1bd" Dec 04 06:37:19 crc kubenswrapper[4832]: I1204 06:37:19.239341 4832 scope.go:117] "RemoveContainer" containerID="dc3c859747828abe19da5534685395d060a1e6e40dfc87c6618a797e8f59d384" Dec 04 06:37:32 crc kubenswrapper[4832]: I1204 06:37:32.711140 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:37:32 crc kubenswrapper[4832]: E1204 06:37:32.713316 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:37:43 crc kubenswrapper[4832]: I1204 06:37:43.711204 4832 
scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:37:43 crc kubenswrapper[4832]: E1204 06:37:43.711968 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:37:44 crc kubenswrapper[4832]: I1204 06:37:44.049764 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-r66tn"] Dec 04 06:37:44 crc kubenswrapper[4832]: I1204 06:37:44.070083 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e659-account-create-update-dr4hp"] Dec 04 06:37:44 crc kubenswrapper[4832]: I1204 06:37:44.084737 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e659-account-create-update-dr4hp"] Dec 04 06:37:44 crc kubenswrapper[4832]: I1204 06:37:44.095411 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-r66tn"] Dec 04 06:37:44 crc kubenswrapper[4832]: I1204 06:37:44.725547 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3e1243-4bec-4c31-9bf8-1d7619986a47" path="/var/lib/kubelet/pods/1e3e1243-4bec-4c31-9bf8-1d7619986a47/volumes" Dec 04 06:37:44 crc kubenswrapper[4832]: I1204 06:37:44.726338 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa677da2-7506-423d-9889-cfd73be70e99" path="/var/lib/kubelet/pods/aa677da2-7506-423d-9889-cfd73be70e99/volumes" Dec 04 06:37:50 crc kubenswrapper[4832]: I1204 06:37:50.029496 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5w6dd"] Dec 04 06:37:50 crc kubenswrapper[4832]: I1204 06:37:50.045665 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a629-account-create-update-5cjb2"] Dec 04 06:37:50 crc kubenswrapper[4832]: I1204 06:37:50.056667 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5w6dd"] Dec 04 06:37:50 crc kubenswrapper[4832]: I1204 06:37:50.064530 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a629-account-create-update-5cjb2"] Dec 04 06:37:50 crc kubenswrapper[4832]: I1204 06:37:50.722143 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c65d67-949b-41ec-a47a-d9b7959e827f" path="/var/lib/kubelet/pods/20c65d67-949b-41ec-a47a-d9b7959e827f/volumes" Dec 04 06:37:50 crc kubenswrapper[4832]: I1204 06:37:50.723190 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78ab66e-ee59-443b-88a1-72bec5167698" path="/var/lib/kubelet/pods/f78ab66e-ee59-443b-88a1-72bec5167698/volumes" Dec 04 06:37:54 crc kubenswrapper[4832]: I1204 06:37:54.032403 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9g9d7"] Dec 04 06:37:54 crc kubenswrapper[4832]: I1204 06:37:54.044276 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-722d-account-create-update-hps7x"] Dec 04 06:37:54 crc kubenswrapper[4832]: I1204 06:37:54.055815 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9g9d7"] Dec 04 06:37:54 crc kubenswrapper[4832]: I1204 06:37:54.064990 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-722d-account-create-update-hps7x"] Dec 04 06:37:54 crc kubenswrapper[4832]: I1204 06:37:54.724841 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2155b199-49bd-41f9-9253-dd3f6d786bdb" path="/var/lib/kubelet/pods/2155b199-49bd-41f9-9253-dd3f6d786bdb/volumes" Dec 04 06:37:54 crc kubenswrapper[4832]: I1204 06:37:54.725719 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2cca739-d2fb-4f7e-aec3-89d3bf4d998b" path="/var/lib/kubelet/pods/d2cca739-d2fb-4f7e-aec3-89d3bf4d998b/volumes" Dec 04 06:37:58 crc kubenswrapper[4832]: I1204 06:37:58.710124 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:37:58 crc kubenswrapper[4832]: E1204 06:37:58.710990 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:38:13 crc kubenswrapper[4832]: I1204 06:38:13.710837 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:38:13 crc kubenswrapper[4832]: E1204 06:38:13.712082 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:38:16 crc kubenswrapper[4832]: I1204 06:38:16.064950 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-n6xzn"] Dec 04 06:38:16 crc kubenswrapper[4832]: I1204 06:38:16.077277 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-n6xzn"] Dec 04 06:38:16 crc kubenswrapper[4832]: I1204 06:38:16.729847 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a717aed-76c5-4d65-8b4e-62bc86503f2d" path="/var/lib/kubelet/pods/3a717aed-76c5-4d65-8b4e-62bc86503f2d/volumes" Dec 04 06:38:18 crc kubenswrapper[4832]: I1204 06:38:18.792623 4832 generic.go:334] "Generic (PLEG): container finished" podID="a88a60b4-19c2-4ef9-b586-2b6733219e7a" containerID="acd4f36c884d5ada22451c19317bdf69379b88c305f9edda865975c4523bb91d" exitCode=0 Dec 04 06:38:18 crc kubenswrapper[4832]: I1204 06:38:18.793310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" event={"ID":"a88a60b4-19c2-4ef9-b586-2b6733219e7a","Type":"ContainerDied","Data":"acd4f36c884d5ada22451c19317bdf69379b88c305f9edda865975c4523bb91d"} Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.306881 4832 scope.go:117] "RemoveContainer" containerID="257e15da5acf5e54dbe3cb0e345d79ce07a92c55b24a13c57254f4701723df4d" Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.349943 4832 scope.go:117] "RemoveContainer" containerID="4ada567d7943587b983fbb0a1839473fff93aa668f03edc04b3ec401bdb14b19" Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.379949 4832 scope.go:117] "RemoveContainer" 
containerID="ea82e158668bdfc3f97696bdd40f14ba60a710aebeac8b230e72b7b930a8cbe2" Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.412831 4832 scope.go:117] "RemoveContainer" containerID="a62149d52621256525ca365a3ec543c430258f0a65c712fbce3f1d19b4bfdff8" Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.455383 4832 scope.go:117] "RemoveContainer" containerID="6badc3dd0b21274b852f80e49c4cb1e042c8eb10c3d6806fea01c605484e2442" Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.503653 4832 scope.go:117] "RemoveContainer" containerID="b37d4cf9152a03efbf82c400d4f21b373603535e0c5a1a0097914a404692c017" Dec 04 06:38:19 crc kubenswrapper[4832]: I1204 06:38:19.545094 4832 scope.go:117] "RemoveContainer" containerID="0b4874334a0366544fd6b93e48d3349541599526c0d7d7af447f93e9dd184deb" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.216823 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.257183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-ssh-key\") pod \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.257422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-inventory\") pod \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.257446 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6644s\" (UniqueName: \"kubernetes.io/projected/a88a60b4-19c2-4ef9-b586-2b6733219e7a-kube-api-access-6644s\") pod \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\" (UID: \"a88a60b4-19c2-4ef9-b586-2b6733219e7a\") " Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.264983 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88a60b4-19c2-4ef9-b586-2b6733219e7a-kube-api-access-6644s" (OuterVolumeSpecName: "kube-api-access-6644s") pod "a88a60b4-19c2-4ef9-b586-2b6733219e7a" (UID: "a88a60b4-19c2-4ef9-b586-2b6733219e7a"). InnerVolumeSpecName "kube-api-access-6644s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.289781 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a88a60b4-19c2-4ef9-b586-2b6733219e7a" (UID: "a88a60b4-19c2-4ef9-b586-2b6733219e7a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.297385 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-inventory" (OuterVolumeSpecName: "inventory") pod "a88a60b4-19c2-4ef9-b586-2b6733219e7a" (UID: "a88a60b4-19c2-4ef9-b586-2b6733219e7a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.360232 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.360266 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6644s\" (UniqueName: \"kubernetes.io/projected/a88a60b4-19c2-4ef9-b586-2b6733219e7a-kube-api-access-6644s\") on node \"crc\" DevicePath \"\"" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.360277 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a88a60b4-19c2-4ef9-b586-2b6733219e7a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.817587 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" event={"ID":"a88a60b4-19c2-4ef9-b586-2b6733219e7a","Type":"ContainerDied","Data":"5f037e2dc65699bf3191843838829049a658f1d8b26450673ba7c853525ab1a9"} Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.817637 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f037e2dc65699bf3191843838829049a658f1d8b26450673ba7c853525ab1a9" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.817677 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t96k5" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.939235 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx"] Dec 04 06:38:20 crc kubenswrapper[4832]: E1204 06:38:20.939819 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88a60b4-19c2-4ef9-b586-2b6733219e7a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.939843 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88a60b4-19c2-4ef9-b586-2b6733219e7a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.940128 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88a60b4-19c2-4ef9-b586-2b6733219e7a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.941033 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.947001 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.947302 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.947701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.951416 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.959828 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx"] Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.969592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsxkb\" (UniqueName: \"kubernetes.io/projected/4fdaa066-59c4-4491-961c-d72bb1a75243-kube-api-access-rsxkb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.969678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:20 crc kubenswrapper[4832]: I1204 06:38:20.969820 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.059901 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2581-account-create-update-p9dgq"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.067431 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rxzc6"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.071887 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsxkb\" (UniqueName: \"kubernetes.io/projected/4fdaa066-59c4-4491-961c-d72bb1a75243-kube-api-access-rsxkb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.071971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: 
\"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.072040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.079020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.090166 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81b7-account-create-update-qwfh6"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.092852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.115037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsxkb\" (UniqueName: \"kubernetes.io/projected/4fdaa066-59c4-4491-961c-d72bb1a75243-kube-api-access-rsxkb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.118232 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-93d4-account-create-update-9swbr"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.133688 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-52rzl"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.140535 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rxzc6"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.147006 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2581-account-create-update-p9dgq"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.161827 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-93d4-account-create-update-9swbr"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.172798 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-81b7-account-create-update-qwfh6"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.179362 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-52rzl"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.258133 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.772005 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx"] Dec 04 06:38:21 crc kubenswrapper[4832]: I1204 06:38:21.827885 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" event={"ID":"4fdaa066-59c4-4491-961c-d72bb1a75243","Type":"ContainerStarted","Data":"20ae26ad25606b95dfb39812914bbcf61768ca03b080e032cada456845f67c72"} Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.726863 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239e8321-436b-4abb-8d3e-9e9dade5f5dd" path="/var/lib/kubelet/pods/239e8321-436b-4abb-8d3e-9e9dade5f5dd/volumes" Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.727851 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97504893-0aed-473e-8297-aef920fc6503" path="/var/lib/kubelet/pods/97504893-0aed-473e-8297-aef920fc6503/volumes" Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.728525 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a78092-fac6-45b5-96a8-acfc47ff879e" path="/var/lib/kubelet/pods/97a78092-fac6-45b5-96a8-acfc47ff879e/volumes" Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.730560 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9986d0ac-da6a-44f4-be0b-6d4009c23176" path="/var/lib/kubelet/pods/9986d0ac-da6a-44f4-be0b-6d4009c23176/volumes" Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.732284 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e29e22-dcdd-42fc-b7ca-412187993b2c" path="/var/lib/kubelet/pods/d3e29e22-dcdd-42fc-b7ca-412187993b2c/volumes" Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.839455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" event={"ID":"4fdaa066-59c4-4491-961c-d72bb1a75243","Type":"ContainerStarted","Data":"50bea0b0910bdd03201fa08394de779a0e649da268b1de86d194728ee3a39f38"} Dec 04 06:38:22 crc kubenswrapper[4832]: I1204 06:38:22.865063 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" podStartSLOduration=2.6650269509999998 podStartE2EDuration="2.865041644s" podCreationTimestamp="2025-12-04 06:38:20 +0000 UTC" firstStartedPulling="2025-12-04 06:38:21.777324742 +0000 UTC m=+1757.390142448" lastFinishedPulling="2025-12-04 06:38:21.977339435 +0000 UTC m=+1757.590157141" observedRunningTime="2025-12-04 06:38:22.857505661 +0000 UTC m=+1758.470323377" watchObservedRunningTime="2025-12-04 06:38:22.865041644 +0000 UTC m=+1758.477859370" Dec 04 06:38:26 crc kubenswrapper[4832]: I1204 06:38:26.036899 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dm9n4"] Dec 04 06:38:26 crc kubenswrapper[4832]: I1204 06:38:26.048786 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dm9n4"] Dec 04 06:38:26 crc kubenswrapper[4832]: I1204 06:38:26.723825 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3afc1f7-3354-4d97-a224-c5f886599881" path="/var/lib/kubelet/pods/e3afc1f7-3354-4d97-a224-c5f886599881/volumes" Dec 04 06:38:28 crc kubenswrapper[4832]: I1204 06:38:28.712310 4832 scope.go:117] 
"RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:38:28 crc kubenswrapper[4832]: E1204 06:38:28.713581 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:38:39 crc kubenswrapper[4832]: I1204 06:38:39.711043 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:38:39 crc kubenswrapper[4832]: E1204 06:38:39.711967 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:38:50 crc kubenswrapper[4832]: I1204 06:38:50.710697 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:38:50 crc kubenswrapper[4832]: E1204 06:38:50.711531 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:38:57 crc kubenswrapper[4832]: I1204 06:38:57.045545 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-lckgk"] Dec 04 06:38:57 crc kubenswrapper[4832]: I1204 06:38:57.054310 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-lckgk"] Dec 04 06:38:58 crc kubenswrapper[4832]: I1204 06:38:58.032195 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tm9nr"] Dec 04 06:38:58 crc kubenswrapper[4832]: I1204 06:38:58.041921 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tm9nr"] Dec 04 06:38:58 crc kubenswrapper[4832]: I1204 06:38:58.721632 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a55ba05-c1ce-48f6-b8af-b3b1497554e2" path="/var/lib/kubelet/pods/1a55ba05-c1ce-48f6-b8af-b3b1497554e2/volumes" Dec 04 06:38:58 crc kubenswrapper[4832]: I1204 06:38:58.722451 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4298e5-b22d-4f71-b682-87539fc2bae7" path="/var/lib/kubelet/pods/8f4298e5-b22d-4f71-b682-87539fc2bae7/volumes" Dec 04 06:39:03 crc kubenswrapper[4832]: I1204 06:39:03.048835 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-znj8j"] Dec 04 06:39:03 crc kubenswrapper[4832]: I1204 06:39:03.063250 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-znj8j"] Dec 04 06:39:04 crc kubenswrapper[4832]: I1204 06:39:04.734992 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43b67ac-4870-4632-a6a2-84db802b371a" 
path="/var/lib/kubelet/pods/e43b67ac-4870-4632-a6a2-84db802b371a/volumes" Dec 04 06:39:05 crc kubenswrapper[4832]: I1204 06:39:05.711110 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:39:06 crc kubenswrapper[4832]: I1204 06:39:06.286918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"8f984311e54227f0b4d82b40815aa71ea1d1ea9bcddd7d057924cdb99fbf0789"} Dec 04 06:39:07 crc kubenswrapper[4832]: I1204 06:39:07.037377 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nm7x4"] Dec 04 06:39:07 crc kubenswrapper[4832]: I1204 06:39:07.048604 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nm7x4"] Dec 04 06:39:08 crc kubenswrapper[4832]: I1204 06:39:08.725520 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ae9519-5721-4fbb-87b1-3b215638adaf" path="/var/lib/kubelet/pods/28ae9519-5721-4fbb-87b1-3b215638adaf/volumes" Dec 04 06:39:14 crc kubenswrapper[4832]: I1204 06:39:14.040280 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jggjz"] Dec 04 06:39:14 crc kubenswrapper[4832]: I1204 06:39:14.052052 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jggjz"] Dec 04 06:39:14 crc kubenswrapper[4832]: I1204 06:39:14.739295 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac8c79c-e51d-4e52-a5d1-1f8472db13b1" path="/var/lib/kubelet/pods/bac8c79c-e51d-4e52-a5d1-1f8472db13b1/volumes" Dec 04 06:39:19 crc kubenswrapper[4832]: I1204 06:39:19.716635 4832 scope.go:117] "RemoveContainer" containerID="cc43b8f3e511c7591ae9cc0bece280bc392c0e42b3a0362db85e4b8b759d589b" Dec 04 06:39:19 crc kubenswrapper[4832]: I1204 06:39:19.751881 4832 scope.go:117] "RemoveContainer" containerID="fe34b2d02b1a7cf187f58366e6eda53e755ec4a0a1f5c8f2f1bbc033b643f1d3" Dec 04 06:39:19 crc kubenswrapper[4832]: I1204 06:39:19.831791 4832 scope.go:117] "RemoveContainer" containerID="0cdda13a5f5f2372712c104a5b736ec90f5eb96f85cd80b0f0669f181c25d3bd" Dec 04 06:39:19 crc kubenswrapper[4832]: I1204 06:39:19.922335 4832 scope.go:117] "RemoveContainer" containerID="1ec08c6c9c2e2e5c28d7a348235144809cb25febbfc0c813e4b9901c2fae084c" Dec 04 06:39:19 crc kubenswrapper[4832]: I1204 06:39:19.981988 4832 scope.go:117] "RemoveContainer" containerID="0a8620eccdd41a2c81ab4d36e60b401e7aae2336bf55e67c8b07672226d3c4f8" Dec 04 06:39:20 crc kubenswrapper[4832]: I1204 06:39:20.046929 4832 scope.go:117] "RemoveContainer" containerID="f92b73253a8f18d477eebd3d46c19a0e1a150d7dbc682b1b018274b8339aa7ba" Dec 04 06:39:20 crc kubenswrapper[4832]: I1204 06:39:20.068681 4832 scope.go:117] "RemoveContainer" containerID="b08a0e6876e7886a9912a4368e604444f9d2966b86307ae429c4825d0ff33d7c" Dec 04 06:39:20 crc kubenswrapper[4832]: I1204 06:39:20.091162 4832 scope.go:117] "RemoveContainer" containerID="a819f838b63512bdad292c913abcc80ee624d6656a75afda6a6136edcf0a2f54" Dec 04 06:39:20 crc kubenswrapper[4832]: I1204 06:39:20.121130 4832 scope.go:117] "RemoveContainer" containerID="0f09073e99532314b0fe1bf6d8545ae1566d4e988b643d96fdcc7c13ccf942b0" Dec 04 06:39:20 crc kubenswrapper[4832]: I1204 06:39:20.160175 4832 scope.go:117] "RemoveContainer" containerID="2fdd0a583a96900f4953bd83e84fb9119e7628d8b7b1e0d3d5ae3e2389d88eca" Dec 04 06:39:20 crc 
kubenswrapper[4832]: I1204 06:39:20.220316 4832 scope.go:117] "RemoveContainer" containerID="912ef879e31ce80a1aba6c344f0f1cd2d3219d5f4850babbe664cc754b984bfa" Dec 04 06:39:28 crc kubenswrapper[4832]: I1204 06:39:28.029533 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mxwh7"] Dec 04 06:39:28 crc kubenswrapper[4832]: I1204 06:39:28.037861 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mxwh7"] Dec 04 06:39:28 crc kubenswrapper[4832]: I1204 06:39:28.722372 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f50b7d2-4e8d-4905-85ec-811cdd3c60d1" path="/var/lib/kubelet/pods/0f50b7d2-4e8d-4905-85ec-811cdd3c60d1/volumes" Dec 04 06:39:30 crc kubenswrapper[4832]: I1204 06:39:30.520589 4832 generic.go:334] "Generic (PLEG): container finished" podID="4fdaa066-59c4-4491-961c-d72bb1a75243" containerID="50bea0b0910bdd03201fa08394de779a0e649da268b1de86d194728ee3a39f38" exitCode=0 Dec 04 06:39:30 crc kubenswrapper[4832]: I1204 06:39:30.521011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" event={"ID":"4fdaa066-59c4-4491-961c-d72bb1a75243","Type":"ContainerDied","Data":"50bea0b0910bdd03201fa08394de779a0e649da268b1de86d194728ee3a39f38"} Dec 04 06:39:31 crc kubenswrapper[4832]: I1204 06:39:31.992574 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.051163 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-ssh-key\") pod \"4fdaa066-59c4-4491-961c-d72bb1a75243\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.051231 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-inventory\") pod \"4fdaa066-59c4-4491-961c-d72bb1a75243\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.091957 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4fdaa066-59c4-4491-961c-d72bb1a75243" (UID: "4fdaa066-59c4-4491-961c-d72bb1a75243"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.095516 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-inventory" (OuterVolumeSpecName: "inventory") pod "4fdaa066-59c4-4491-961c-d72bb1a75243" (UID: "4fdaa066-59c4-4491-961c-d72bb1a75243"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.153958 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsxkb\" (UniqueName: \"kubernetes.io/projected/4fdaa066-59c4-4491-961c-d72bb1a75243-kube-api-access-rsxkb\") pod \"4fdaa066-59c4-4491-961c-d72bb1a75243\" (UID: \"4fdaa066-59c4-4491-961c-d72bb1a75243\") " Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.154568 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.154591 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdaa066-59c4-4491-961c-d72bb1a75243-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.199297 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdaa066-59c4-4491-961c-d72bb1a75243-kube-api-access-rsxkb" (OuterVolumeSpecName: "kube-api-access-rsxkb") pod "4fdaa066-59c4-4491-961c-d72bb1a75243" (UID: "4fdaa066-59c4-4491-961c-d72bb1a75243"). InnerVolumeSpecName "kube-api-access-rsxkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.257711 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsxkb\" (UniqueName: \"kubernetes.io/projected/4fdaa066-59c4-4491-961c-d72bb1a75243-kube-api-access-rsxkb\") on node \"crc\" DevicePath \"\"" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.537477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" event={"ID":"4fdaa066-59c4-4491-961c-d72bb1a75243","Type":"ContainerDied","Data":"20ae26ad25606b95dfb39812914bbcf61768ca03b080e032cada456845f67c72"} Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.537522 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ae26ad25606b95dfb39812914bbcf61768ca03b080e032cada456845f67c72" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.537576 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.651484 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7"] Dec 04 06:39:32 crc kubenswrapper[4832]: E1204 06:39:32.652338 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdaa066-59c4-4491-961c-d72bb1a75243" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.652372 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdaa066-59c4-4491-961c-d72bb1a75243" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.652800 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdaa066-59c4-4491-961c-d72bb1a75243" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.654319 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.657953 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.659782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.659782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.659912 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.666744 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7"] Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.667976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.668350 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.668642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvzs\" (UniqueName: \"kubernetes.io/projected/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-kube-api-access-5pvzs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.771931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvzs\" (UniqueName: \"kubernetes.io/projected/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-kube-api-access-5pvzs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.772062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.772158 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.778534 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.779002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.792358 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvzs\" (UniqueName: \"kubernetes.io/projected/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-kube-api-access-5pvzs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-smwz7\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:32 crc kubenswrapper[4832]: I1204 06:39:32.972595 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:33 crc kubenswrapper[4832]: I1204 06:39:33.481541 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7"] Dec 04 06:39:33 crc kubenswrapper[4832]: I1204 06:39:33.548370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" event={"ID":"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac","Type":"ContainerStarted","Data":"9d69d362a0f9b39c632e02f3e83d34b4aa65739634187f8bbb315cbc6ea2789a"} Dec 04 06:39:34 crc kubenswrapper[4832]: I1204 06:39:34.557734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" event={"ID":"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac","Type":"ContainerStarted","Data":"02d55f5f2f5498fb4fc4d4139e856fec64a9b5ba070d7deeda24ff68d9ab7c60"} Dec 04 06:39:34 crc kubenswrapper[4832]: I1204 06:39:34.577420 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" podStartSLOduration=2.38493519 podStartE2EDuration="2.577405719s" podCreationTimestamp="2025-12-04 06:39:32 +0000 UTC" firstStartedPulling="2025-12-04 06:39:33.493739015 +0000 UTC m=+1829.106556721" lastFinishedPulling="2025-12-04 06:39:33.686209544 +0000 UTC m=+1829.299027250" observedRunningTime="2025-12-04 06:39:34.575067031 +0000 UTC m=+1830.187884747" watchObservedRunningTime="2025-12-04 06:39:34.577405719 +0000 UTC m=+1830.190223425" Dec 04 06:39:38 crc kubenswrapper[4832]: I1204 06:39:38.596119 4832 generic.go:334] "Generic (PLEG): container finished" podID="c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" containerID="02d55f5f2f5498fb4fc4d4139e856fec64a9b5ba070d7deeda24ff68d9ab7c60" exitCode=0 Dec 04 06:39:38 crc kubenswrapper[4832]: I1204 
06:39:38.596203 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" event={"ID":"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac","Type":"ContainerDied","Data":"02d55f5f2f5498fb4fc4d4139e856fec64a9b5ba070d7deeda24ff68d9ab7c60"} Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.020712 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.108437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-ssh-key\") pod \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.108631 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-inventory\") pod \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.108737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pvzs\" (UniqueName: \"kubernetes.io/projected/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-kube-api-access-5pvzs\") pod \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\" (UID: \"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac\") " Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.116460 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-kube-api-access-5pvzs" (OuterVolumeSpecName: "kube-api-access-5pvzs") pod "c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" (UID: "c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac"). InnerVolumeSpecName "kube-api-access-5pvzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.138048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" (UID: "c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.146747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-inventory" (OuterVolumeSpecName: "inventory") pod "c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" (UID: "c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.210475 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.210505 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pvzs\" (UniqueName: \"kubernetes.io/projected/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-kube-api-access-5pvzs\") on node \"crc\" DevicePath \"\"" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.210515 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.616278 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" event={"ID":"c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac","Type":"ContainerDied","Data":"9d69d362a0f9b39c632e02f3e83d34b4aa65739634187f8bbb315cbc6ea2789a"} Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.616321 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d69d362a0f9b39c632e02f3e83d34b4aa65739634187f8bbb315cbc6ea2789a" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.616353 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-smwz7" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.694549 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp"] Dec 04 06:39:40 crc kubenswrapper[4832]: E1204 06:39:40.695555 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.695572 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.696111 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.697321 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.705731 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.706001 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.707494 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.707665 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.737883 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp"] Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.828844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.828904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.828973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz78f\" (UniqueName: \"kubernetes.io/projected/069bfa79-a14b-4545-a791-be3f21ed774f-kube-api-access-jz78f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.931336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.931719 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.931837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz78f\" (UniqueName: \"kubernetes.io/projected/069bfa79-a14b-4545-a791-be3f21ed774f-kube-api-access-jz78f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: 
\"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.937245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.945166 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:40 crc kubenswrapper[4832]: I1204 06:39:40.950919 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz78f\" (UniqueName: \"kubernetes.io/projected/069bfa79-a14b-4545-a791-be3f21ed774f-kube-api-access-jz78f\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nhghp\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:41 crc kubenswrapper[4832]: I1204 06:39:41.031091 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:39:41 crc kubenswrapper[4832]: I1204 06:39:41.595823 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp"] Dec 04 06:39:41 crc kubenswrapper[4832]: I1204 06:39:41.634715 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" event={"ID":"069bfa79-a14b-4545-a791-be3f21ed774f","Type":"ContainerStarted","Data":"91b556457895834c0aeea7daf8179a523105d30a25591ec0a246b8e3996abb93"} Dec 04 06:39:42 crc kubenswrapper[4832]: I1204 06:39:42.643565 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" event={"ID":"069bfa79-a14b-4545-a791-be3f21ed774f","Type":"ContainerStarted","Data":"c85d4694382e17c1e84fa14ac6ceeb5e57678e52a70da54c05e49ea3762b2435"} Dec 04 06:39:42 crc kubenswrapper[4832]: I1204 06:39:42.660675 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" podStartSLOduration=2.489230194 podStartE2EDuration="2.66062925s" podCreationTimestamp="2025-12-04 06:39:40 +0000 UTC" firstStartedPulling="2025-12-04 06:39:41.599639249 +0000 UTC m=+1837.212456955" lastFinishedPulling="2025-12-04 06:39:41.771038305 +0000 UTC m=+1837.383856011" observedRunningTime="2025-12-04 06:39:42.659727528 +0000 UTC m=+1838.272545244" watchObservedRunningTime="2025-12-04 06:39:42.66062925 +0000 UTC m=+1838.273446966" Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.075767 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-p55hz"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.094143 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vzgts"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.110441 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-p55hz"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.119526 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-564c-account-create-update-62wgk"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.128841 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0b65-account-create-update-r8vb2"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.142617 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5b4b-account-create-update-bfrhh"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.153075 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vzgts"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.179744 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0b65-account-create-update-r8vb2"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.195478 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-564c-account-create-update-62wgk"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.207128 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ch62d"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.221662 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5b4b-account-create-update-bfrhh"] Dec 04 06:40:01 crc kubenswrapper[4832]: I1204 06:40:01.233789 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ch62d"] Dec 04 06:40:02 crc kubenswrapper[4832]: I1204 06:40:02.725779 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3922f617-41ab-48fa-a501-caa685e933e0" path="/var/lib/kubelet/pods/3922f617-41ab-48fa-a501-caa685e933e0/volumes" Dec 04 06:40:02 crc kubenswrapper[4832]: I1204 06:40:02.727052 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0967a7-c40d-45f5-b32f-ff14f40d5337" path="/var/lib/kubelet/pods/7f0967a7-c40d-45f5-b32f-ff14f40d5337/volumes" Dec 04 06:40:02 crc kubenswrapper[4832]: I1204 06:40:02.727715 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9531a1a5-90a1-487d-a8ea-532746866ae1" path="/var/lib/kubelet/pods/9531a1a5-90a1-487d-a8ea-532746866ae1/volumes" Dec 04 06:40:02 crc kubenswrapper[4832]: I1204 06:40:02.728278 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a675f899-d638-40e8-a597-daa574df9e75" path="/var/lib/kubelet/pods/a675f899-d638-40e8-a597-daa574df9e75/volumes" Dec 04 06:40:02 crc kubenswrapper[4832]: I1204 06:40:02.729335 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc636c61-b131-450a-b60b-d205ea0a3c36" path="/var/lib/kubelet/pods/cc636c61-b131-450a-b60b-d205ea0a3c36/volumes" Dec 04 06:40:02 crc kubenswrapper[4832]: I1204 06:40:02.729914 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4561bf4-2c44-4350-8426-4353129c50cf" path="/var/lib/kubelet/pods/e4561bf4-2c44-4350-8426-4353129c50cf/volumes" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.456644 4832 scope.go:117] "RemoveContainer" containerID="376e49d0330626302f245ceb643cb87e52e1ef208a5250a390ec14b6dc9cb5c9" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.526415 4832 scope.go:117] "RemoveContainer" containerID="7ac4b67a414bf1356549e551fe6a70463d772bb4b325013d4c798be259a2ac81" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.563268 4832 scope.go:117] 
"RemoveContainer" containerID="f2fcf9d889fff4f05b6ad596fd02d9f873f5b31e52fc0b8dac4f1d8899287264" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.609484 4832 scope.go:117] "RemoveContainer" containerID="ecd076b1e891728d2757fbe9b9ffaaffa76d8a3ab7dfd109033c5cd99e002c63" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.677420 4832 scope.go:117] "RemoveContainer" containerID="f00930ef4636333fb39683a7caf25dcd248e97285f840eec1315cd68224d2120" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.704719 4832 scope.go:117] "RemoveContainer" containerID="3f0596a5f682a544afc50adfeb9796e44bf6f456ab1458a4ea5a1c9e141b7e5c" Dec 04 06:40:20 crc kubenswrapper[4832]: I1204 06:40:20.758020 4832 scope.go:117] "RemoveContainer" containerID="011aa75a50cc64e419b29504acd836af8f2815a8797018b03b702f532459f8ec" Dec 04 06:40:21 crc kubenswrapper[4832]: I1204 06:40:21.097525 4832 generic.go:334] "Generic (PLEG): container finished" podID="069bfa79-a14b-4545-a791-be3f21ed774f" containerID="c85d4694382e17c1e84fa14ac6ceeb5e57678e52a70da54c05e49ea3762b2435" exitCode=0 Dec 04 06:40:21 crc kubenswrapper[4832]: I1204 06:40:21.097602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" event={"ID":"069bfa79-a14b-4545-a791-be3f21ed774f","Type":"ContainerDied","Data":"c85d4694382e17c1e84fa14ac6ceeb5e57678e52a70da54c05e49ea3762b2435"} Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.543586 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.714020 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-inventory\") pod \"069bfa79-a14b-4545-a791-be3f21ed774f\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.714166 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz78f\" (UniqueName: \"kubernetes.io/projected/069bfa79-a14b-4545-a791-be3f21ed774f-kube-api-access-jz78f\") pod \"069bfa79-a14b-4545-a791-be3f21ed774f\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.714241 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-ssh-key\") pod \"069bfa79-a14b-4545-a791-be3f21ed774f\" (UID: \"069bfa79-a14b-4545-a791-be3f21ed774f\") " Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.723517 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069bfa79-a14b-4545-a791-be3f21ed774f-kube-api-access-jz78f" (OuterVolumeSpecName: "kube-api-access-jz78f") pod "069bfa79-a14b-4545-a791-be3f21ed774f" (UID: "069bfa79-a14b-4545-a791-be3f21ed774f"). InnerVolumeSpecName "kube-api-access-jz78f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.746614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "069bfa79-a14b-4545-a791-be3f21ed774f" (UID: "069bfa79-a14b-4545-a791-be3f21ed774f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.752764 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-inventory" (OuterVolumeSpecName: "inventory") pod "069bfa79-a14b-4545-a791-be3f21ed774f" (UID: "069bfa79-a14b-4545-a791-be3f21ed774f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.817784 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.818192 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz78f\" (UniqueName: \"kubernetes.io/projected/069bfa79-a14b-4545-a791-be3f21ed774f-kube-api-access-jz78f\") on node \"crc\" DevicePath \"\"" Dec 04 06:40:22 crc kubenswrapper[4832]: I1204 06:40:22.818284 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/069bfa79-a14b-4545-a791-be3f21ed774f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.131511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" event={"ID":"069bfa79-a14b-4545-a791-be3f21ed774f","Type":"ContainerDied","Data":"91b556457895834c0aeea7daf8179a523105d30a25591ec0a246b8e3996abb93"} Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.131580 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b556457895834c0aeea7daf8179a523105d30a25591ec0a246b8e3996abb93" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.131723 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nhghp" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.220996 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm"] Dec 04 06:40:23 crc kubenswrapper[4832]: E1204 06:40:23.221483 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069bfa79-a14b-4545-a791-be3f21ed774f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.221506 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="069bfa79-a14b-4545-a791-be3f21ed774f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.221731 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="069bfa79-a14b-4545-a791-be3f21ed774f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.222352 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.225381 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.225734 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.227015 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.227225 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.237088 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm"] Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.334305 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.334484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24frq\" (UniqueName: \"kubernetes.io/projected/452e59ff-3e14-4082-b812-ff4d5d671b27-kube-api-access-24frq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.334949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.437761 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.437904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.437945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24frq\" (UniqueName: \"kubernetes.io/projected/452e59ff-3e14-4082-b812-ff4d5d671b27-kube-api-access-24frq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" 
(UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.444289 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.446777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.457888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24frq\" (UniqueName: \"kubernetes.io/projected/452e59ff-3e14-4082-b812-ff4d5d671b27-kube-api-access-24frq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:23 crc kubenswrapper[4832]: I1204 06:40:23.557635 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:40:24 crc kubenswrapper[4832]: I1204 06:40:24.103693 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm"] Dec 04 06:40:24 crc kubenswrapper[4832]: I1204 06:40:24.141433 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" event={"ID":"452e59ff-3e14-4082-b812-ff4d5d671b27","Type":"ContainerStarted","Data":"ec589f79a8f19b433f74618a25923855fc90bc0eab75d29463c5a9661ab2bb59"} Dec 04 06:40:25 crc kubenswrapper[4832]: I1204 06:40:25.156669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" event={"ID":"452e59ff-3e14-4082-b812-ff4d5d671b27","Type":"ContainerStarted","Data":"0c1a26e8bc59f76d711d4801d07acdc244d6661cd17a55971c760deb652b3c5d"} Dec 04 06:40:25 crc kubenswrapper[4832]: I1204 06:40:25.185181 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" podStartSLOduration=2.026444552 podStartE2EDuration="2.185152026s" podCreationTimestamp="2025-12-04 06:40:23 +0000 UTC" firstStartedPulling="2025-12-04 06:40:24.108605495 +0000 UTC m=+1879.721423201" lastFinishedPulling="2025-12-04 06:40:24.267312969 +0000 UTC m=+1879.880130675" observedRunningTime="2025-12-04 06:40:25.181994009 +0000 UTC m=+1880.794811715" watchObservedRunningTime="2025-12-04 06:40:25.185152026 +0000 UTC m=+1880.797969732" Dec 04 06:40:33 crc kubenswrapper[4832]: I1204 06:40:33.050500 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnc6z"] Dec 04 06:40:33 crc kubenswrapper[4832]: I1204 06:40:33.064738 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnc6z"] Dec 04 06:40:34 crc kubenswrapper[4832]: I1204 06:40:34.726277 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd907035-aa8f-4dd1-bc4d-06eb3fde3b49" path="/var/lib/kubelet/pods/bd907035-aa8f-4dd1-bc4d-06eb3fde3b49/volumes" Dec 04 06:40:52 crc kubenswrapper[4832]: I1204 06:40:52.079960 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-64xph"] Dec 04 06:40:52 crc kubenswrapper[4832]: I1204 06:40:52.098344 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-64xph"] Dec 04 06:40:52 crc kubenswrapper[4832]: I1204 06:40:52.721771 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cdd38c2-1620-41ab-bb2e-7a82a7a0858e" path="/var/lib/kubelet/pods/4cdd38c2-1620-41ab-bb2e-7a82a7a0858e/volumes" Dec 04 06:40:53 crc kubenswrapper[4832]: I1204 06:40:53.040422 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9929d"] Dec 04 06:40:53 crc kubenswrapper[4832]: I1204 06:40:53.050255 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9929d"] Dec 04 06:40:54 crc kubenswrapper[4832]: I1204 06:40:54.723458 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933ec7f2-4591-4b3c-b681-d97d7ef7d41d" path="/var/lib/kubelet/pods/933ec7f2-4591-4b3c-b681-d97d7ef7d41d/volumes" Dec 04 06:41:05 crc kubenswrapper[4832]: I1204 06:41:05.363092 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:41:05 crc kubenswrapper[4832]: I1204 06:41:05.363714 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:41:14 crc kubenswrapper[4832]: I1204 06:41:14.683290 4832 generic.go:334] "Generic (PLEG): container finished" podID="452e59ff-3e14-4082-b812-ff4d5d671b27" containerID="0c1a26e8bc59f76d711d4801d07acdc244d6661cd17a55971c760deb652b3c5d" exitCode=0 Dec 04 06:41:14 crc kubenswrapper[4832]: I1204 06:41:14.683369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" event={"ID":"452e59ff-3e14-4082-b812-ff4d5d671b27","Type":"ContainerDied","Data":"0c1a26e8bc59f76d711d4801d07acdc244d6661cd17a55971c760deb652b3c5d"} Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.096704 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.166865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-inventory\") pod \"452e59ff-3e14-4082-b812-ff4d5d671b27\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.167516 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-ssh-key\") pod \"452e59ff-3e14-4082-b812-ff4d5d671b27\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.167647 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24frq\" (UniqueName: \"kubernetes.io/projected/452e59ff-3e14-4082-b812-ff4d5d671b27-kube-api-access-24frq\") pod \"452e59ff-3e14-4082-b812-ff4d5d671b27\" (UID: \"452e59ff-3e14-4082-b812-ff4d5d671b27\") " Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.176146 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452e59ff-3e14-4082-b812-ff4d5d671b27-kube-api-access-24frq" (OuterVolumeSpecName: "kube-api-access-24frq") pod "452e59ff-3e14-4082-b812-ff4d5d671b27" (UID: "452e59ff-3e14-4082-b812-ff4d5d671b27"). InnerVolumeSpecName "kube-api-access-24frq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.202450 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "452e59ff-3e14-4082-b812-ff4d5d671b27" (UID: "452e59ff-3e14-4082-b812-ff4d5d671b27"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.206493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-inventory" (OuterVolumeSpecName: "inventory") pod "452e59ff-3e14-4082-b812-ff4d5d671b27" (UID: "452e59ff-3e14-4082-b812-ff4d5d671b27"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.269756 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.269796 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24frq\" (UniqueName: \"kubernetes.io/projected/452e59ff-3e14-4082-b812-ff4d5d671b27-kube-api-access-24frq\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.269808 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/452e59ff-3e14-4082-b812-ff4d5d671b27-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.705770 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" event={"ID":"452e59ff-3e14-4082-b812-ff4d5d671b27","Type":"ContainerDied","Data":"ec589f79a8f19b433f74618a25923855fc90bc0eab75d29463c5a9661ab2bb59"} Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.705815 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec589f79a8f19b433f74618a25923855fc90bc0eab75d29463c5a9661ab2bb59" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.705861 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.844997 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ztwr8"] Dec 04 06:41:16 crc kubenswrapper[4832]: E1204 06:41:16.845520 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452e59ff-3e14-4082-b812-ff4d5d671b27" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.845546 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="452e59ff-3e14-4082-b812-ff4d5d671b27" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.845767 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="452e59ff-3e14-4082-b812-ff4d5d671b27" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.846775 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.849274 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.849719 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.849974 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.852592 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.856787 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ztwr8"] Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.880536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.881039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.881106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps45s\" (UniqueName: \"kubernetes.io/projected/54ad7a01-5a9d-4735-b86d-391a24a663ad-kube-api-access-ps45s\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.982717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.982769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps45s\" (UniqueName: \"kubernetes.io/projected/54ad7a01-5a9d-4735-b86d-391a24a663ad-kube-api-access-ps45s\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.982849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc 
kubenswrapper[4832]: I1204 06:41:16.990119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:16 crc kubenswrapper[4832]: I1204 06:41:16.996785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:17 crc kubenswrapper[4832]: I1204 06:41:17.017179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps45s\" (UniqueName: \"kubernetes.io/projected/54ad7a01-5a9d-4735-b86d-391a24a663ad-kube-api-access-ps45s\") pod \"ssh-known-hosts-edpm-deployment-ztwr8\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:17 crc kubenswrapper[4832]: I1204 06:41:17.163937 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:17 crc kubenswrapper[4832]: I1204 06:41:17.731147 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ztwr8"] Dec 04 06:41:18 crc kubenswrapper[4832]: I1204 06:41:18.727434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" event={"ID":"54ad7a01-5a9d-4735-b86d-391a24a663ad","Type":"ContainerStarted","Data":"59ebc6916033d2ac92d8343d116e8e21ae2af87b92209df07bc8b9f1075b7eca"} Dec 04 06:41:18 crc kubenswrapper[4832]: I1204 06:41:18.727930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" event={"ID":"54ad7a01-5a9d-4735-b86d-391a24a663ad","Type":"ContainerStarted","Data":"6f1f27358477138e945d772ba18ff1b769b1a5cf11cdcb04dd7aa56513bdf6fa"} Dec 04 06:41:18 crc kubenswrapper[4832]: I1204 06:41:18.754058 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" podStartSLOduration=2.529463607 podStartE2EDuration="2.754030985s" podCreationTimestamp="2025-12-04 06:41:16 +0000 UTC" firstStartedPulling="2025-12-04 06:41:17.743492292 +0000 UTC m=+1933.356309998" lastFinishedPulling="2025-12-04 06:41:17.96805966 +0000 UTC m=+1933.580877376" observedRunningTime="2025-12-04 06:41:18.745702343 +0000 UTC m=+1934.358520049" watchObservedRunningTime="2025-12-04 06:41:18.754030985 +0000 UTC m=+1934.366848691" Dec 04 06:41:20 crc kubenswrapper[4832]: I1204 06:41:20.898608 4832 scope.go:117] "RemoveContainer" containerID="e553fd559fcf2eb2a0c3dc8cb7bab8282ab44e3610cdf75f21922629a3127a45" Dec 04 06:41:20 crc kubenswrapper[4832]: I1204 06:41:20.981200 4832 scope.go:117] "RemoveContainer" containerID="a612f43ec383e04b6d664e1d71b42aeb185003c310a63c9f24ff85edc33ddd69" Dec 04 06:41:21 crc kubenswrapper[4832]: I1204 06:41:21.062358 4832 scope.go:117] "RemoveContainer" containerID="3f0375403c06179d4a3439ef48a3d7f37b4ee4c8b99d700f62f8f41eba064d98" Dec 04 06:41:25 crc kubenswrapper[4832]: I1204 06:41:25.810551 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="54ad7a01-5a9d-4735-b86d-391a24a663ad" containerID="59ebc6916033d2ac92d8343d116e8e21ae2af87b92209df07bc8b9f1075b7eca" exitCode=0 Dec 04 06:41:25 crc kubenswrapper[4832]: I1204 06:41:25.810668 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" event={"ID":"54ad7a01-5a9d-4735-b86d-391a24a663ad","Type":"ContainerDied","Data":"59ebc6916033d2ac92d8343d116e8e21ae2af87b92209df07bc8b9f1075b7eca"} Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.272544 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.327297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps45s\" (UniqueName: \"kubernetes.io/projected/54ad7a01-5a9d-4735-b86d-391a24a663ad-kube-api-access-ps45s\") pod \"54ad7a01-5a9d-4735-b86d-391a24a663ad\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.327354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-ssh-key-openstack-edpm-ipam\") pod \"54ad7a01-5a9d-4735-b86d-391a24a663ad\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.327492 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-inventory-0\") pod \"54ad7a01-5a9d-4735-b86d-391a24a663ad\" (UID: \"54ad7a01-5a9d-4735-b86d-391a24a663ad\") " Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.339915 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ad7a01-5a9d-4735-b86d-391a24a663ad-kube-api-access-ps45s" (OuterVolumeSpecName: "kube-api-access-ps45s") pod "54ad7a01-5a9d-4735-b86d-391a24a663ad" (UID: "54ad7a01-5a9d-4735-b86d-391a24a663ad"). InnerVolumeSpecName "kube-api-access-ps45s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.359946 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "54ad7a01-5a9d-4735-b86d-391a24a663ad" (UID: "54ad7a01-5a9d-4735-b86d-391a24a663ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.385586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "54ad7a01-5a9d-4735-b86d-391a24a663ad" (UID: "54ad7a01-5a9d-4735-b86d-391a24a663ad"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.429898 4832 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.429941 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps45s\" (UniqueName: \"kubernetes.io/projected/54ad7a01-5a9d-4735-b86d-391a24a663ad-kube-api-access-ps45s\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.429956 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/54ad7a01-5a9d-4735-b86d-391a24a663ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.834754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" event={"ID":"54ad7a01-5a9d-4735-b86d-391a24a663ad","Type":"ContainerDied","Data":"6f1f27358477138e945d772ba18ff1b769b1a5cf11cdcb04dd7aa56513bdf6fa"} Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.834812 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1f27358477138e945d772ba18ff1b769b1a5cf11cdcb04dd7aa56513bdf6fa" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.834856 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ztwr8" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.912494 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb"] Dec 04 06:41:27 crc kubenswrapper[4832]: E1204 06:41:27.913260 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ad7a01-5a9d-4735-b86d-391a24a663ad" containerName="ssh-known-hosts-edpm-deployment" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.913307 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ad7a01-5a9d-4735-b86d-391a24a663ad" containerName="ssh-known-hosts-edpm-deployment" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.913711 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ad7a01-5a9d-4735-b86d-391a24a663ad" containerName="ssh-known-hosts-edpm-deployment" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.914968 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.919523 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.919840 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.919927 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.921343 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.932950 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb"] Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.940439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.940687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wch9\" (UniqueName: \"kubernetes.io/projected/cbeb7492-95ba-4887-afee-a0fada68f151-kube-api-access-9wch9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:27 crc kubenswrapper[4832]: I1204 06:41:27.940750 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.043806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.043943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wch9\" (UniqueName: \"kubernetes.io/projected/cbeb7492-95ba-4887-afee-a0fada68f151-kube-api-access-9wch9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.043970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.048814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.049228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.072801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wch9\" (UniqueName: \"kubernetes.io/projected/cbeb7492-95ba-4887-afee-a0fada68f151-kube-api-access-9wch9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-95gcb\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.243231 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.817806 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb"] Dec 04 06:41:28 crc kubenswrapper[4832]: I1204 06:41:28.847615 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" event={"ID":"cbeb7492-95ba-4887-afee-a0fada68f151","Type":"ContainerStarted","Data":"798cc9a92c039cd8ba563aa594468e5b9c25ea1547cdd6102f5b9b8fbb501e5c"} Dec 04 06:41:29 crc kubenswrapper[4832]: I1204 06:41:29.859808 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" event={"ID":"cbeb7492-95ba-4887-afee-a0fada68f151","Type":"ContainerStarted","Data":"aafe7dc58df11e9f5b06e920bbe65e7d8e3931c3af5b88df7e5ffa25f72b4c6b"} Dec 04 06:41:29 crc kubenswrapper[4832]: I1204 06:41:29.877371 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" podStartSLOduration=2.684879032 podStartE2EDuration="2.877341549s" podCreationTimestamp="2025-12-04 06:41:27 +0000 UTC" firstStartedPulling="2025-12-04 06:41:28.823961441 +0000 UTC m=+1944.436779147" lastFinishedPulling="2025-12-04 06:41:29.016423958 +0000 UTC m=+1944.629241664" observedRunningTime="2025-12-04 06:41:29.875071293 +0000 UTC m=+1945.487889019" watchObservedRunningTime="2025-12-04 06:41:29.877341549 +0000 UTC m=+1945.490159255" Dec 04 06:41:35 crc kubenswrapper[4832]: I1204 06:41:35.362609 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:41:35 crc kubenswrapper[4832]: I1204 06:41:35.363746 4832 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:41:37 crc kubenswrapper[4832]: I1204 06:41:37.053315 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xbxqr"] Dec 04 06:41:37 crc kubenswrapper[4832]: I1204 06:41:37.068456 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xbxqr"] Dec 04 06:41:38 crc kubenswrapper[4832]: I1204 06:41:38.732849 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22a38d3-115d-4317-844d-65b82c8dea97" path="/var/lib/kubelet/pods/b22a38d3-115d-4317-844d-65b82c8dea97/volumes" Dec 04 06:41:38 crc kubenswrapper[4832]: I1204 06:41:38.960250 4832 generic.go:334] "Generic (PLEG): container finished" podID="cbeb7492-95ba-4887-afee-a0fada68f151" containerID="aafe7dc58df11e9f5b06e920bbe65e7d8e3931c3af5b88df7e5ffa25f72b4c6b" exitCode=0 Dec 04 06:41:38 crc kubenswrapper[4832]: I1204 06:41:38.960310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" event={"ID":"cbeb7492-95ba-4887-afee-a0fada68f151","Type":"ContainerDied","Data":"aafe7dc58df11e9f5b06e920bbe65e7d8e3931c3af5b88df7e5ffa25f72b4c6b"} Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.478494 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.576850 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wch9\" (UniqueName: \"kubernetes.io/projected/cbeb7492-95ba-4887-afee-a0fada68f151-kube-api-access-9wch9\") pod \"cbeb7492-95ba-4887-afee-a0fada68f151\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.576904 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-ssh-key\") pod \"cbeb7492-95ba-4887-afee-a0fada68f151\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.577251 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-inventory\") pod \"cbeb7492-95ba-4887-afee-a0fada68f151\" (UID: \"cbeb7492-95ba-4887-afee-a0fada68f151\") " Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.584552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbeb7492-95ba-4887-afee-a0fada68f151-kube-api-access-9wch9" (OuterVolumeSpecName: "kube-api-access-9wch9") pod "cbeb7492-95ba-4887-afee-a0fada68f151" (UID: "cbeb7492-95ba-4887-afee-a0fada68f151"). InnerVolumeSpecName "kube-api-access-9wch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.610486 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-inventory" (OuterVolumeSpecName: "inventory") pod "cbeb7492-95ba-4887-afee-a0fada68f151" (UID: "cbeb7492-95ba-4887-afee-a0fada68f151"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.613858 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cbeb7492-95ba-4887-afee-a0fada68f151" (UID: "cbeb7492-95ba-4887-afee-a0fada68f151"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.681098 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.681194 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wch9\" (UniqueName: \"kubernetes.io/projected/cbeb7492-95ba-4887-afee-a0fada68f151-kube-api-access-9wch9\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.681218 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbeb7492-95ba-4887-afee-a0fada68f151-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.993520 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" event={"ID":"cbeb7492-95ba-4887-afee-a0fada68f151","Type":"ContainerDied","Data":"798cc9a92c039cd8ba563aa594468e5b9c25ea1547cdd6102f5b9b8fbb501e5c"} Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.993603 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-95gcb" Dec 04 06:41:40 crc kubenswrapper[4832]: I1204 06:41:40.993616 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798cc9a92c039cd8ba563aa594468e5b9c25ea1547cdd6102f5b9b8fbb501e5c" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.086715 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc"] Dec 04 06:41:41 crc kubenswrapper[4832]: E1204 06:41:41.087491 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbeb7492-95ba-4887-afee-a0fada68f151" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.087516 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbeb7492-95ba-4887-afee-a0fada68f151" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.087768 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbeb7492-95ba-4887-afee-a0fada68f151" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.088758 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.091415 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.091778 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.093890 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.094173 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.097987 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc"] Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.194071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.194198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmjr\" (UniqueName: \"kubernetes.io/projected/98e6fcf2-9409-4e06-846b-d96d4106e2b8-kube-api-access-ggmjr\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.194247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.296695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.296940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmjr\" (UniqueName: \"kubernetes.io/projected/98e6fcf2-9409-4e06-846b-d96d4106e2b8-kube-api-access-ggmjr\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.297012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: 
\"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.304318 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.304557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.320796 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmjr\" (UniqueName: \"kubernetes.io/projected/98e6fcf2-9409-4e06-846b-d96d4106e2b8-kube-api-access-ggmjr\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:41 crc kubenswrapper[4832]: I1204 06:41:41.416108 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:42 crc kubenswrapper[4832]: I1204 06:41:42.023527 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc"] Dec 04 06:41:42 crc kubenswrapper[4832]: W1204 06:41:42.032849 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98e6fcf2_9409_4e06_846b_d96d4106e2b8.slice/crio-704ea95135a4b981d3f099d82ad1d9bd4670bdc021dca05074e44d93aa27cad5 WatchSource:0}: Error finding container 704ea95135a4b981d3f099d82ad1d9bd4670bdc021dca05074e44d93aa27cad5: Status 404 returned error can't find the container with id 704ea95135a4b981d3f099d82ad1d9bd4670bdc021dca05074e44d93aa27cad5 Dec 04 06:41:43 crc kubenswrapper[4832]: I1204 06:41:43.025222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" event={"ID":"98e6fcf2-9409-4e06-846b-d96d4106e2b8","Type":"ContainerStarted","Data":"02c4c9c3d5cbed6f4152f872bd2f0986970d6484ca21fc288d27d1a67f41c340"} Dec 04 06:41:43 crc kubenswrapper[4832]: I1204 06:41:43.026540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" event={"ID":"98e6fcf2-9409-4e06-846b-d96d4106e2b8","Type":"ContainerStarted","Data":"704ea95135a4b981d3f099d82ad1d9bd4670bdc021dca05074e44d93aa27cad5"} Dec 04 06:41:43 crc kubenswrapper[4832]: I1204 06:41:43.056328 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" podStartSLOduration=1.875981122 podStartE2EDuration="2.056298742s" podCreationTimestamp="2025-12-04 06:41:41 +0000 UTC" firstStartedPulling="2025-12-04 06:41:42.036233117 +0000 UTC m=+1957.649050833" lastFinishedPulling="2025-12-04 06:41:42.216550737 +0000 UTC m=+1957.829368453" observedRunningTime="2025-12-04 06:41:43.052169951 +0000 UTC m=+1958.664987677" 
watchObservedRunningTime="2025-12-04 06:41:43.056298742 +0000 UTC m=+1958.669116488" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.147694 4832 generic.go:334] "Generic (PLEG): container finished" podID="98e6fcf2-9409-4e06-846b-d96d4106e2b8" containerID="02c4c9c3d5cbed6f4152f872bd2f0986970d6484ca21fc288d27d1a67f41c340" exitCode=0 Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.148505 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" event={"ID":"98e6fcf2-9409-4e06-846b-d96d4106e2b8","Type":"ContainerDied","Data":"02c4c9c3d5cbed6f4152f872bd2f0986970d6484ca21fc288d27d1a67f41c340"} Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.203567 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q2b8d"] Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.206575 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.222151 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2b8d"] Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.323353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmp26\" (UniqueName: \"kubernetes.io/projected/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-kube-api-access-wmp26\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.323581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-utilities\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.323683 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-catalog-content\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.425531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmp26\" (UniqueName: \"kubernetes.io/projected/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-kube-api-access-wmp26\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.425678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-utilities\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.425740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-catalog-content\") pod \"redhat-marketplace-q2b8d\" (UID: 
\"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.426350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-catalog-content\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.426620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-utilities\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.450617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmp26\" (UniqueName: \"kubernetes.io/projected/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-kube-api-access-wmp26\") pod \"redhat-marketplace-q2b8d\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:52 crc kubenswrapper[4832]: I1204 06:41:52.536772 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.020116 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2b8d"] Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.161267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2b8d" event={"ID":"0f9c1072-28b2-42ec-80ab-8b989cfea0dd","Type":"ContainerStarted","Data":"8b27e8e6780b896194d9973647c73a737a63daa6a105bfc8b82aa413b45c0fde"} Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.580415 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.654805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-inventory\") pod \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.654900 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggmjr\" (UniqueName: \"kubernetes.io/projected/98e6fcf2-9409-4e06-846b-d96d4106e2b8-kube-api-access-ggmjr\") pod \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.654983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-ssh-key\") pod \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\" (UID: \"98e6fcf2-9409-4e06-846b-d96d4106e2b8\") " Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.663914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e6fcf2-9409-4e06-846b-d96d4106e2b8-kube-api-access-ggmjr" (OuterVolumeSpecName: "kube-api-access-ggmjr") pod "98e6fcf2-9409-4e06-846b-d96d4106e2b8" (UID: "98e6fcf2-9409-4e06-846b-d96d4106e2b8"). InnerVolumeSpecName "kube-api-access-ggmjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.700737 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98e6fcf2-9409-4e06-846b-d96d4106e2b8" (UID: "98e6fcf2-9409-4e06-846b-d96d4106e2b8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.707683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-inventory" (OuterVolumeSpecName: "inventory") pod "98e6fcf2-9409-4e06-846b-d96d4106e2b8" (UID: "98e6fcf2-9409-4e06-846b-d96d4106e2b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.758276 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.758343 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggmjr\" (UniqueName: \"kubernetes.io/projected/98e6fcf2-9409-4e06-846b-d96d4106e2b8-kube-api-access-ggmjr\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:53 crc kubenswrapper[4832]: I1204 06:41:53.758368 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98e6fcf2-9409-4e06-846b-d96d4106e2b8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.184344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" event={"ID":"98e6fcf2-9409-4e06-846b-d96d4106e2b8","Type":"ContainerDied","Data":"704ea95135a4b981d3f099d82ad1d9bd4670bdc021dca05074e44d93aa27cad5"} Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.184418 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704ea95135a4b981d3f099d82ad1d9bd4670bdc021dca05074e44d93aa27cad5" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.184470 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.186834 4832 generic.go:334] "Generic (PLEG): container finished" podID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerID="364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5" exitCode=0 Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.186880 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2b8d" event={"ID":"0f9c1072-28b2-42ec-80ab-8b989cfea0dd","Type":"ContainerDied","Data":"364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5"} Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.190294 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.273824 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj"] Dec 04 06:41:54 crc kubenswrapper[4832]: E1204 06:41:54.276092 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e6fcf2-9409-4e06-846b-d96d4106e2b8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.276118 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e6fcf2-9409-4e06-846b-d96d4106e2b8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.276422 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e6fcf2-9409-4e06-846b-d96d4106e2b8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.277442 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.280271 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.280444 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.280374 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.280897 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.281091 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.281343 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.281659 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.283319 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.303324 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj"] Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372772 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2xp\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-kube-api-access-9r2xp\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372869 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.372978 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.373002 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.373021 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.373082 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: 
\"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.373106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.373133 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.373160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474514 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2xp\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-kube-api-access-9r2xp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474739 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.474976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.475024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.475055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.475084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.481376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.481820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.482843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.482993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.483678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.484955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.485668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.485934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.486005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.487057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.488835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.489947 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.490401 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc 
kubenswrapper[4832]: I1204 06:41:54.496281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2xp\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-kube-api-access-9r2xp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:54 crc kubenswrapper[4832]: I1204 06:41:54.599993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:41:55 crc kubenswrapper[4832]: I1204 06:41:55.185881 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj"] Dec 04 06:41:55 crc kubenswrapper[4832]: I1204 06:41:55.199508 4832 generic.go:334] "Generic (PLEG): container finished" podID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerID="c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677" exitCode=0 Dec 04 06:41:55 crc kubenswrapper[4832]: I1204 06:41:55.199583 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2b8d" event={"ID":"0f9c1072-28b2-42ec-80ab-8b989cfea0dd","Type":"ContainerDied","Data":"c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677"} Dec 04 06:41:55 crc kubenswrapper[4832]: W1204 06:41:55.236786 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa905232_11b8_4af4_96a8_7a7ef46bf17d.slice/crio-3c39dab1c7bb46afd89113a9ac32b21478f4132633e5d49978e6bf97036bf73c WatchSource:0}: Error finding container 3c39dab1c7bb46afd89113a9ac32b21478f4132633e5d49978e6bf97036bf73c: Status 404 returned error can't find the container with id 3c39dab1c7bb46afd89113a9ac32b21478f4132633e5d49978e6bf97036bf73c Dec 04 06:41:56 crc kubenswrapper[4832]: I1204 06:41:56.225972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2b8d" event={"ID":"0f9c1072-28b2-42ec-80ab-8b989cfea0dd","Type":"ContainerStarted","Data":"067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1"} Dec 04 06:41:56 crc kubenswrapper[4832]: I1204 06:41:56.229586 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" event={"ID":"fa905232-11b8-4af4-96a8-7a7ef46bf17d","Type":"ContainerStarted","Data":"43ff9f8a3c775ce5c4d930d24fe389c3a4734a4b0e1f98d61aca79361c3b4cc0"} Dec 04 06:41:56 crc kubenswrapper[4832]: I1204 06:41:56.229617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" event={"ID":"fa905232-11b8-4af4-96a8-7a7ef46bf17d","Type":"ContainerStarted","Data":"3c39dab1c7bb46afd89113a9ac32b21478f4132633e5d49978e6bf97036bf73c"} Dec 04 06:41:56 crc kubenswrapper[4832]: I1204 06:41:56.252296 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q2b8d" podStartSLOduration=2.858572396 podStartE2EDuration="4.252266789s" podCreationTimestamp="2025-12-04 06:41:52 +0000 UTC" firstStartedPulling="2025-12-04 06:41:54.189958567 +0000 UTC m=+1969.802776283" lastFinishedPulling="2025-12-04 06:41:55.58365297 +0000 UTC m=+1971.196470676" observedRunningTime="2025-12-04 06:41:56.245971536 +0000 UTC m=+1971.858789242" watchObservedRunningTime="2025-12-04 
06:41:56.252266789 +0000 UTC m=+1971.865084495" Dec 04 06:41:56 crc kubenswrapper[4832]: I1204 06:41:56.279611 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" podStartSLOduration=2.121889444 podStartE2EDuration="2.279584374s" podCreationTimestamp="2025-12-04 06:41:54 +0000 UTC" firstStartedPulling="2025-12-04 06:41:55.240427993 +0000 UTC m=+1970.853245699" lastFinishedPulling="2025-12-04 06:41:55.398122923 +0000 UTC m=+1971.010940629" observedRunningTime="2025-12-04 06:41:56.267892169 +0000 UTC m=+1971.880709875" watchObservedRunningTime="2025-12-04 06:41:56.279584374 +0000 UTC m=+1971.892402070" Dec 04 06:42:02 crc kubenswrapper[4832]: I1204 06:42:02.537778 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:42:02 crc kubenswrapper[4832]: I1204 06:42:02.539112 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:42:02 crc kubenswrapper[4832]: I1204 06:42:02.642450 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:42:03 crc kubenswrapper[4832]: I1204 06:42:03.391699 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:42:03 crc kubenswrapper[4832]: I1204 06:42:03.478165 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2b8d"] Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.336787 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q2b8d" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="registry-server" containerID="cri-o://067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1" gracePeriod=2 Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.362838 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.362935 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.362983 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.363816 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f984311e54227f0b4d82b40815aa71ea1d1ea9bcddd7d057924cdb99fbf0789"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.363885 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://8f984311e54227f0b4d82b40815aa71ea1d1ea9bcddd7d057924cdb99fbf0789" gracePeriod=600 Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.901761 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.917262 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-utilities\") pod \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.917461 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-catalog-content\") pod \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.917554 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmp26\" (UniqueName: \"kubernetes.io/projected/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-kube-api-access-wmp26\") pod \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\" (UID: \"0f9c1072-28b2-42ec-80ab-8b989cfea0dd\") " Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.921413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-utilities" (OuterVolumeSpecName: "utilities") pod "0f9c1072-28b2-42ec-80ab-8b989cfea0dd" (UID: "0f9c1072-28b2-42ec-80ab-8b989cfea0dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.929671 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-kube-api-access-wmp26" (OuterVolumeSpecName: "kube-api-access-wmp26") pod "0f9c1072-28b2-42ec-80ab-8b989cfea0dd" (UID: "0f9c1072-28b2-42ec-80ab-8b989cfea0dd"). InnerVolumeSpecName "kube-api-access-wmp26". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:05 crc kubenswrapper[4832]: I1204 06:42:05.944635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f9c1072-28b2-42ec-80ab-8b989cfea0dd" (UID: "0f9c1072-28b2-42ec-80ab-8b989cfea0dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.020589 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmp26\" (UniqueName: \"kubernetes.io/projected/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-kube-api-access-wmp26\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.020642 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.020656 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9c1072-28b2-42ec-80ab-8b989cfea0dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.348674 4832 generic.go:334] "Generic (PLEG): container finished" podID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerID="067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1" exitCode=0 Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.348720 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2b8d" event={"ID":"0f9c1072-28b2-42ec-80ab-8b989cfea0dd","Type":"ContainerDied","Data":"067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1"} Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.349363 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q2b8d" event={"ID":"0f9c1072-28b2-42ec-80ab-8b989cfea0dd","Type":"ContainerDied","Data":"8b27e8e6780b896194d9973647c73a737a63daa6a105bfc8b82aa413b45c0fde"} Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.348733 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q2b8d" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.349406 4832 scope.go:117] "RemoveContainer" containerID="067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.372767 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="8f984311e54227f0b4d82b40815aa71ea1d1ea9bcddd7d057924cdb99fbf0789" exitCode=0 Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.372852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"8f984311e54227f0b4d82b40815aa71ea1d1ea9bcddd7d057924cdb99fbf0789"} Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.372900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"} Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.410663 4832 scope.go:117] "RemoveContainer" containerID="c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.413105 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2b8d"] Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.423036 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q2b8d"] Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.434404 4832 scope.go:117] "RemoveContainer" containerID="364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.453070 4832 scope.go:117] "RemoveContainer" containerID="067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1" Dec 04 06:42:06 crc kubenswrapper[4832]: E1204 06:42:06.453527 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1\": container with ID starting with 067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1 not found: ID does not exist" containerID="067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.453580 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1"} err="failed to get container status \"067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1\": rpc error: code = NotFound desc = could not find container \"067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1\": container with ID starting with 067cccc63a06d9d8b14aafb6a4ae8070066318aeeb99fc16970e8f97d0f711a1 not found: ID does not exist" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.453615 4832 scope.go:117] "RemoveContainer" containerID="c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677" Dec 04 06:42:06 crc kubenswrapper[4832]: E1204 06:42:06.454272 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677\": container with ID starting with c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677 not found: ID does not exist" containerID="c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.454356 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677"} err="failed to get container status \"c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677\": rpc error: code = NotFound desc = could not find container \"c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677\": container with ID starting with c01c9657c03f8fae0cb53ec2ad900a7069c5fa24384f1f4e0ad68967fb806677 not found: ID does not exist" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.454433 4832 scope.go:117] "RemoveContainer" containerID="364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5" Dec 04 06:42:06 crc kubenswrapper[4832]: E1204 06:42:06.454847 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5\": container with ID starting with 364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5 not found: ID does not exist" containerID="364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.454907 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5"} err="failed to get container status \"364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5\": rpc error: code = NotFound desc = could not find container \"364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5\": container with ID starting with 364fcd3f83195c6e6744577e3b26002baa4ff6d1be9f987d76bf8970b86cc9f5 not found: ID does not exist" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.454934 4832 scope.go:117] "RemoveContainer" containerID="19e639a83ea971c415c4b9704144ad1b0e818a2f5e3bd5a13781f0c73c8b17f5" Dec 04 06:42:06 crc kubenswrapper[4832]: I1204 06:42:06.723033 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" path="/var/lib/kubelet/pods/0f9c1072-28b2-42ec-80ab-8b989cfea0dd/volumes" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.518702 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lh5r4"] Dec 04 06:42:12 crc kubenswrapper[4832]: E1204 06:42:12.520586 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="registry-server" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.520612 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="registry-server" Dec 04 06:42:12 crc kubenswrapper[4832]: E1204 06:42:12.520654 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="extract-utilities" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.520672 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="extract-utilities" Dec 04 06:42:12 crc kubenswrapper[4832]: E1204 
06:42:12.520714 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="extract-content" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.520730 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="extract-content" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.521127 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9c1072-28b2-42ec-80ab-8b989cfea0dd" containerName="registry-server" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.524204 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.531052 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lh5r4"] Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.676054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-utilities\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.676151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv9hm\" (UniqueName: \"kubernetes.io/projected/1eadce47-e138-478b-8dcf-ea557cbf4e18-kube-api-access-hv9hm\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.676225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-catalog-content\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.778492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-utilities\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.778614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv9hm\" (UniqueName: \"kubernetes.io/projected/1eadce47-e138-478b-8dcf-ea557cbf4e18-kube-api-access-hv9hm\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.778697 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-catalog-content\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.779283 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-catalog-content\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.779320 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-utilities\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.805316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv9hm\" (UniqueName: \"kubernetes.io/projected/1eadce47-e138-478b-8dcf-ea557cbf4e18-kube-api-access-hv9hm\") pod \"redhat-operators-lh5r4\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:12 crc kubenswrapper[4832]: I1204 06:42:12.855948 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:13 crc kubenswrapper[4832]: I1204 06:42:13.370757 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lh5r4"] Dec 04 06:42:13 crc kubenswrapper[4832]: I1204 06:42:13.459277 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerStarted","Data":"2d0cee9c9ca6c6ade6638def8e7d65ab1719776fde1b888f4e8c89986fa30473"} Dec 04 06:42:14 crc kubenswrapper[4832]: I1204 06:42:14.475284 4832 generic.go:334] "Generic (PLEG): container finished" podID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerID="ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d" exitCode=0 Dec 04 06:42:14 crc kubenswrapper[4832]: I1204 06:42:14.475429 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerDied","Data":"ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d"} Dec 04 06:42:20 crc kubenswrapper[4832]: I1204 06:42:20.556051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerStarted","Data":"0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d"} Dec 04 06:42:21 crc kubenswrapper[4832]: I1204 06:42:21.199085 4832 scope.go:117] "RemoveContainer" containerID="660f41e3e3b57c5933b93313c83781f543055c17d03cbc21f06f8d67de1e6553" Dec 04 06:42:22 crc kubenswrapper[4832]: I1204 06:42:22.582722 4832 generic.go:334] "Generic (PLEG): container finished" podID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerID="0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d" exitCode=0 Dec 04 06:42:22 crc kubenswrapper[4832]: I1204 06:42:22.582782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerDied","Data":"0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d"} Dec 04 06:42:23 crc kubenswrapper[4832]: I1204 06:42:23.594579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" 
event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerStarted","Data":"78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1"} Dec 04 06:42:23 crc kubenswrapper[4832]: I1204 06:42:23.625851 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lh5r4" podStartSLOduration=3.058630293 podStartE2EDuration="11.625831483s" podCreationTimestamp="2025-12-04 06:42:12 +0000 UTC" firstStartedPulling="2025-12-04 06:42:14.478311983 +0000 UTC m=+1990.091129689" lastFinishedPulling="2025-12-04 06:42:23.045513173 +0000 UTC m=+1998.658330879" observedRunningTime="2025-12-04 06:42:23.616552446 +0000 UTC m=+1999.229370162" watchObservedRunningTime="2025-12-04 06:42:23.625831483 +0000 UTC m=+1999.238649189" Dec 04 06:42:32 crc kubenswrapper[4832]: I1204 06:42:32.683113 4832 generic.go:334] "Generic (PLEG): container finished" podID="fa905232-11b8-4af4-96a8-7a7ef46bf17d" containerID="43ff9f8a3c775ce5c4d930d24fe389c3a4734a4b0e1f98d61aca79361c3b4cc0" exitCode=0 Dec 04 06:42:32 crc kubenswrapper[4832]: I1204 06:42:32.683193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" event={"ID":"fa905232-11b8-4af4-96a8-7a7ef46bf17d","Type":"ContainerDied","Data":"43ff9f8a3c775ce5c4d930d24fe389c3a4734a4b0e1f98d61aca79361c3b4cc0"} Dec 04 06:42:32 crc kubenswrapper[4832]: I1204 06:42:32.856987 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:32 crc kubenswrapper[4832]: I1204 06:42:32.857038 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:32 crc kubenswrapper[4832]: I1204 06:42:32.933734 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:33 crc kubenswrapper[4832]: I1204 06:42:33.768776 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:33 crc kubenswrapper[4832]: I1204 06:42:33.836549 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lh5r4"] Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.171763 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.237879 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-inventory\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.237932 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238018 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-bootstrap-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238096 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-neutron-metadata-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238130 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-libvirt-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238152 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r2xp\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-kube-api-access-9r2xp\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ovn-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238212 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: 
\"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238269 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ssh-key\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238318 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-repo-setup-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-nova-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238365 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.238461 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-telemetry-combined-ca-bundle\") pod \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\" (UID: \"fa905232-11b8-4af4-96a8-7a7ef46bf17d\") " Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.246123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.246145 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.246458 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.247863 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.247940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.248089 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.248149 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-kube-api-access-9r2xp" (OuterVolumeSpecName: "kube-api-access-9r2xp") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "kube-api-access-9r2xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.248175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.251225 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.251807 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.251836 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.259799 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.277308 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.287258 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-inventory" (OuterVolumeSpecName: "inventory") pod "fa905232-11b8-4af4-96a8-7a7ef46bf17d" (UID: "fa905232-11b8-4af4-96a8-7a7ef46bf17d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342021 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342084 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342110 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342136 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342160 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342182 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342202 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342222 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342246 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342267 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342288 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342308 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342328 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r2xp\" (UniqueName: \"kubernetes.io/projected/fa905232-11b8-4af4-96a8-7a7ef46bf17d-kube-api-access-9r2xp\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.342349 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa905232-11b8-4af4-96a8-7a7ef46bf17d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.707238 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" event={"ID":"fa905232-11b8-4af4-96a8-7a7ef46bf17d","Type":"ContainerDied","Data":"3c39dab1c7bb46afd89113a9ac32b21478f4132633e5d49978e6bf97036bf73c"} Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.707292 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c39dab1c7bb46afd89113a9ac32b21478f4132633e5d49978e6bf97036bf73c" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.707311 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.832799 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg"] Dec 04 06:42:34 crc kubenswrapper[4832]: E1204 06:42:34.835184 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa905232-11b8-4af4-96a8-7a7ef46bf17d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.835742 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa905232-11b8-4af4-96a8-7a7ef46bf17d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.836137 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa905232-11b8-4af4-96a8-7a7ef46bf17d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.837376 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.841252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.841923 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.841991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.842334 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.842344 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.844750 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg"] Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.961267 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.961361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.961566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m8zb\" (UniqueName: \"kubernetes.io/projected/7cf92a16-6e70-4b63-a14e-30a5b041a80f-kube-api-access-2m8zb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.961608 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:34 crc kubenswrapper[4832]: I1204 06:42:34.961628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.064037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.064163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.064238 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m8zb\" (UniqueName: \"kubernetes.io/projected/7cf92a16-6e70-4b63-a14e-30a5b041a80f-kube-api-access-2m8zb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.064283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.064312 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.065152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.069282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.069687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.072651 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.094204 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m8zb\" (UniqueName: \"kubernetes.io/projected/7cf92a16-6e70-4b63-a14e-30a5b041a80f-kube-api-access-2m8zb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9j4xg\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.175871 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.716855 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lh5r4" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="registry-server" containerID="cri-o://78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1" gracePeriod=2 Dec 04 06:42:35 crc kubenswrapper[4832]: I1204 06:42:35.737782 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg"] Dec 04 06:42:35 crc kubenswrapper[4832]: W1204 06:42:35.737933 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf92a16_6e70_4b63_a14e_30a5b041a80f.slice/crio-c0427b25e560a00a17c5819117a583d17a6a6ee2b59d9ae9d0061f99a46f09a4 WatchSource:0}: Error finding container c0427b25e560a00a17c5819117a583d17a6a6ee2b59d9ae9d0061f99a46f09a4: Status 404 returned error can't find the container with id c0427b25e560a00a17c5819117a583d17a6a6ee2b59d9ae9d0061f99a46f09a4 Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.207880 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.286653 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv9hm\" (UniqueName: \"kubernetes.io/projected/1eadce47-e138-478b-8dcf-ea557cbf4e18-kube-api-access-hv9hm\") pod \"1eadce47-e138-478b-8dcf-ea557cbf4e18\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.287757 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-utilities\") pod \"1eadce47-e138-478b-8dcf-ea557cbf4e18\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.288582 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-utilities" (OuterVolumeSpecName: "utilities") pod "1eadce47-e138-478b-8dcf-ea557cbf4e18" (UID: "1eadce47-e138-478b-8dcf-ea557cbf4e18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.288770 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-catalog-content\") pod \"1eadce47-e138-478b-8dcf-ea557cbf4e18\" (UID: \"1eadce47-e138-478b-8dcf-ea557cbf4e18\") " Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.292550 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.301520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eadce47-e138-478b-8dcf-ea557cbf4e18-kube-api-access-hv9hm" (OuterVolumeSpecName: "kube-api-access-hv9hm") pod "1eadce47-e138-478b-8dcf-ea557cbf4e18" (UID: "1eadce47-e138-478b-8dcf-ea557cbf4e18"). InnerVolumeSpecName "kube-api-access-hv9hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.394232 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv9hm\" (UniqueName: \"kubernetes.io/projected/1eadce47-e138-478b-8dcf-ea557cbf4e18-kube-api-access-hv9hm\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.399623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1eadce47-e138-478b-8dcf-ea557cbf4e18" (UID: "1eadce47-e138-478b-8dcf-ea557cbf4e18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.495630 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eadce47-e138-478b-8dcf-ea557cbf4e18-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.731193 4832 generic.go:334] "Generic (PLEG): container finished" podID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerID="78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1" exitCode=0 Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.731346 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lh5r4" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.755183 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerDied","Data":"78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1"} Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.755240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh5r4" event={"ID":"1eadce47-e138-478b-8dcf-ea557cbf4e18","Type":"ContainerDied","Data":"2d0cee9c9ca6c6ade6638def8e7d65ab1719776fde1b888f4e8c89986fa30473"} Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.755251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" event={"ID":"7cf92a16-6e70-4b63-a14e-30a5b041a80f","Type":"ContainerStarted","Data":"4652e0af38c2335b8bd5c3a28f03cb60dae146166ff01b1ca476c8f4ba34ac6b"} Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.755266 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" event={"ID":"7cf92a16-6e70-4b63-a14e-30a5b041a80f","Type":"ContainerStarted","Data":"c0427b25e560a00a17c5819117a583d17a6a6ee2b59d9ae9d0061f99a46f09a4"} Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.755292 4832 scope.go:117] "RemoveContainer" containerID="78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.759321 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" podStartSLOduration=2.542118985 podStartE2EDuration="2.759298851s" podCreationTimestamp="2025-12-04 06:42:34 +0000 UTC" firstStartedPulling="2025-12-04 06:42:35.741301932 +0000 UTC m=+2011.354119638" lastFinishedPulling="2025-12-04 06:42:35.958481798 +0000 UTC m=+2011.571299504" observedRunningTime="2025-12-04 06:42:36.753316754 +0000 UTC m=+2012.366134530" watchObservedRunningTime="2025-12-04 06:42:36.759298851 +0000 UTC m=+2012.372116557" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.786879 4832 scope.go:117] "RemoveContainer" containerID="0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.788036 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lh5r4"] Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.798855 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lh5r4"] Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.812325 4832 scope.go:117] "RemoveContainer" containerID="ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.856616 4832 scope.go:117] "RemoveContainer" containerID="78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1" Dec 04 06:42:36 crc kubenswrapper[4832]: E1204 06:42:36.857201 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1\": container with ID starting with 78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1 not found: ID does not exist" containerID="78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1" Dec 04 
06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.857236 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1"} err="failed to get container status \"78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1\": rpc error: code = NotFound desc = could not find container \"78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1\": container with ID starting with 78adb9127e1e700b775c4ee327a861e1c39486f7d2084521b6cad1ba3285ebe1 not found: ID does not exist" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.857276 4832 scope.go:117] "RemoveContainer" containerID="0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d" Dec 04 06:42:36 crc kubenswrapper[4832]: E1204 06:42:36.857799 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d\": container with ID starting with 0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d not found: ID does not exist" containerID="0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.857842 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d"} err="failed to get container status \"0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d\": rpc error: code = NotFound desc = could not find container \"0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d\": container with ID starting with 0b2d13144733e6a37d01a7c35a2f016f0e4bf4d303b0bdaeb825949b8c20db2d not found: ID does not exist" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.857857 4832 scope.go:117] "RemoveContainer" containerID="ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d" Dec 04 06:42:36 crc kubenswrapper[4832]: E1204 06:42:36.859364 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d\": container with ID starting with ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d not found: ID does not exist" containerID="ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d" Dec 04 06:42:36 crc kubenswrapper[4832]: I1204 06:42:36.859487 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d"} err="failed to get container status \"ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d\": rpc error: code = NotFound desc = could not find container \"ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d\": container with ID starting with ca84b645ab752efcb7a2c15e87861e9f269adf17e771c3ad76ff274be058fc8d not found: ID does not exist" Dec 04 06:42:38 crc kubenswrapper[4832]: I1204 06:42:38.721699 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" path="/var/lib/kubelet/pods/1eadce47-e138-478b-8dcf-ea557cbf4e18/volumes" Dec 04 06:43:35 crc kubenswrapper[4832]: I1204 06:43:35.371726 4832 generic.go:334] "Generic (PLEG): container finished" podID="7cf92a16-6e70-4b63-a14e-30a5b041a80f" containerID="4652e0af38c2335b8bd5c3a28f03cb60dae146166ff01b1ca476c8f4ba34ac6b" 
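Editor's note: the "could not find container ... NotFound" errors above are a benign race: the container was already removed between the RemoveContainer request and the follow-up status lookup, so the runtime returns gRPC NotFound. Deletion paths typically treat that status as success, as in this small sketch (the container ID is abbreviated for illustration).

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// The errors in the log are gRPC NotFound statuses from the runtime. A
// deletion helper usually treats "already gone" as success:
func ignoreNotFound(err error) error {
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // nothing left to delete
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "could not find container \"78adb9…\"")
	fmt.Println(ignoreNotFound(err)) // <nil>: the race is benign
}
```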
Dec 04 06:43:35 crc kubenswrapper[4832]: I1204 06:43:35.371830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" event={"ID":"7cf92a16-6e70-4b63-a14e-30a5b041a80f","Type":"ContainerDied","Data":"4652e0af38c2335b8bd5c3a28f03cb60dae146166ff01b1ca476c8f4ba34ac6b"}
Dec 04 06:43:36 crc kubenswrapper[4832]: I1204 06:43:36.847755 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg"
Dec 04 06:43:36 crc kubenswrapper[4832]: I1204 06:43:36.995328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovn-combined-ca-bundle\") pod \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") "
Dec 04 06:43:36 crc kubenswrapper[4832]: I1204 06:43:36.995515 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ssh-key\") pod \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") "
Dec 04 06:43:36 crc kubenswrapper[4832]: I1204 06:43:36.995762 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovncontroller-config-0\") pod \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") "
Dec 04 06:43:36 crc kubenswrapper[4832]: I1204 06:43:36.995860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m8zb\" (UniqueName: \"kubernetes.io/projected/7cf92a16-6e70-4b63-a14e-30a5b041a80f-kube-api-access-2m8zb\") pod \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") "
Dec 04 06:43:36 crc kubenswrapper[4832]: I1204 06:43:36.995886 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-inventory\") pod \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\" (UID: \"7cf92a16-6e70-4b63-a14e-30a5b041a80f\") "
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.003322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf92a16-6e70-4b63-a14e-30a5b041a80f-kube-api-access-2m8zb" (OuterVolumeSpecName: "kube-api-access-2m8zb") pod "7cf92a16-6e70-4b63-a14e-30a5b041a80f" (UID: "7cf92a16-6e70-4b63-a14e-30a5b041a80f"). InnerVolumeSpecName "kube-api-access-2m8zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.004992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7cf92a16-6e70-4b63-a14e-30a5b041a80f" (UID: "7cf92a16-6e70-4b63-a14e-30a5b041a80f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.028346 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-inventory" (OuterVolumeSpecName: "inventory") pod "7cf92a16-6e70-4b63-a14e-30a5b041a80f" (UID: "7cf92a16-6e70-4b63-a14e-30a5b041a80f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.029138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7cf92a16-6e70-4b63-a14e-30a5b041a80f" (UID: "7cf92a16-6e70-4b63-a14e-30a5b041a80f"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.032917 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7cf92a16-6e70-4b63-a14e-30a5b041a80f" (UID: "7cf92a16-6e70-4b63-a14e-30a5b041a80f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.099204 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.099252 4832 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.099269 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m8zb\" (UniqueName: \"kubernetes.io/projected/7cf92a16-6e70-4b63-a14e-30a5b041a80f-kube-api-access-2m8zb\") on node \"crc\" DevicePath \"\""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.099287 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-inventory\") on node \"crc\" DevicePath \"\""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.099309 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf92a16-6e70-4b63-a14e-30a5b041a80f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.399970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg" event={"ID":"7cf92a16-6e70-4b63-a14e-30a5b041a80f","Type":"ContainerDied","Data":"c0427b25e560a00a17c5819117a583d17a6a6ee2b59d9ae9d0061f99a46f09a4"}
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.400016 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0427b25e560a00a17c5819117a583d17a6a6ee2b59d9ae9d0061f99a46f09a4"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.400090 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9j4xg"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.518223 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx"]
Dec 04 06:43:37 crc kubenswrapper[4832]: E1204 06:43:37.519336 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="extract-content"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.519458 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="extract-content"
Dec 04 06:43:37 crc kubenswrapper[4832]: E1204 06:43:37.519581 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="registry-server"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.519671 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="registry-server"
Dec 04 06:43:37 crc kubenswrapper[4832]: E1204 06:43:37.519777 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf92a16-6e70-4b63-a14e-30a5b041a80f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.519851 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf92a16-6e70-4b63-a14e-30a5b041a80f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 04 06:43:37 crc kubenswrapper[4832]: E1204 06:43:37.519930 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="extract-utilities"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.520002 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="extract-utilities"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.520335 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eadce47-e138-478b-8dcf-ea557cbf4e18" containerName="registry-server"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.520478 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf92a16-6e70-4b63-a14e-30a5b041a80f" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.521659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx"
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.528157 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.528199 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.528159 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.528157 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.528757 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.528833 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.532120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx"] Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.622607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.622665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.622705 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.622771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5sdn\" (UniqueName: \"kubernetes.io/projected/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-kube-api-access-c5sdn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.622809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.622835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.725145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.725229 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5sdn\" (UniqueName: \"kubernetes.io/projected/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-kube-api-access-c5sdn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.726382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.726452 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.729345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.729431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.732346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.732728 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.733219 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.734413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.735591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.744573 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5sdn\" (UniqueName: \"kubernetes.io/projected/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-kube-api-access-c5sdn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:37 crc kubenswrapper[4832]: I1204 06:43:37.848086 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" Dec 04 06:43:38 crc kubenswrapper[4832]: I1204 06:43:38.463690 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx"] Dec 04 06:43:39 crc kubenswrapper[4832]: I1204 06:43:39.420983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" event={"ID":"27b9693c-3bb0-4819-bbcf-87634b6bb8e3","Type":"ContainerStarted","Data":"ea6b2e8c83faa14a27c110453ff1c6228051cedaa310600d46711a5fb37b01b5"} Dec 04 06:43:39 crc kubenswrapper[4832]: I1204 06:43:39.421422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" event={"ID":"27b9693c-3bb0-4819-bbcf-87634b6bb8e3","Type":"ContainerStarted","Data":"c230eef02289150c4e987f815f1051e01187c880ac9df1315e8532377db7f077"} Dec 04 06:44:05 crc kubenswrapper[4832]: I1204 06:44:05.362632 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:44:05 crc kubenswrapper[4832]: I1204 06:44:05.363552 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:44:25 crc kubenswrapper[4832]: I1204 06:44:25.929930 4832 generic.go:334] "Generic (PLEG): container finished" podID="27b9693c-3bb0-4819-bbcf-87634b6bb8e3" containerID="ea6b2e8c83faa14a27c110453ff1c6228051cedaa310600d46711a5fb37b01b5" exitCode=0 Dec 04 06:44:25 crc kubenswrapper[4832]: I1204 06:44:25.930003 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" event={"ID":"27b9693c-3bb0-4819-bbcf-87634b6bb8e3","Type":"ContainerDied","Data":"ea6b2e8c83faa14a27c110453ff1c6228051cedaa310600d46711a5fb37b01b5"} Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.382198 4832 util.go:48] "No ready sandbox for pod can be found. 
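[Editor's note] The kubenswrapper entries above share a regular shape: a journald prefix (date, host, unit[pid]), a klog header (severity letter, MMDD date, time, PID, file:line), a quoted message, and trailing key="value" fields. A minimal parsing sketch in Python, using only the stdlib; the regex handles the simple cases shown here and would need extension for messages that embed escaped quotes:

```python
import re

# Field names below mirror the visible log layout; nothing more is assumed.
LINE = re.compile(
    r'^(?P<month>\w{3}) (?P<day>\d{2}) (?P<time>[\d:]{8}) (?P<host>\S+) '
    r'kubenswrapper\[(?P<wrapper_pid>\d+)\]: '
    r'(?P<level>[IEW])(?P<klog_date>\d{4}) (?P<klog_time>[\d:.]+)\s+\d+ '
    r'(?P<source>[\w.]+:\d+)\] "(?P<msg>[^"]*)"(?P<rest>.*)$'
)

# Sample entry copied verbatim from the log above.
sample = ('Dec 04 06:43:38 crc kubenswrapper[4832]: I1204 06:43:38.463690 4832 '
          'kubelet.go:2428] "SyncLoop UPDATE" source="api" '
          'pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx"]')
m = LINE.match(sample)
print(m["level"], m["source"], m["msg"], m["rest"].strip())
# I kubelet.go:2428 SyncLoop UPDATE source="api" pods=[...]
```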
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.558239 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-inventory\") pod \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") "
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.558473 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-ssh-key\") pod \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") "
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.558546 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-metadata-combined-ca-bundle\") pod \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") "
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.558705 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") "
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.558799 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5sdn\" (UniqueName: \"kubernetes.io/projected/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-kube-api-access-c5sdn\") pod \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") "
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.558848 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-nova-metadata-neutron-config-0\") pod \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\" (UID: \"27b9693c-3bb0-4819-bbcf-87634b6bb8e3\") "
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.574827 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-kube-api-access-c5sdn" (OuterVolumeSpecName: "kube-api-access-c5sdn") pod "27b9693c-3bb0-4819-bbcf-87634b6bb8e3" (UID: "27b9693c-3bb0-4819-bbcf-87634b6bb8e3"). InnerVolumeSpecName "kube-api-access-c5sdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.576594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "27b9693c-3bb0-4819-bbcf-87634b6bb8e3" (UID: "27b9693c-3bb0-4819-bbcf-87634b6bb8e3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.589089 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-inventory" (OuterVolumeSpecName: "inventory") pod "27b9693c-3bb0-4819-bbcf-87634b6bb8e3" (UID: "27b9693c-3bb0-4819-bbcf-87634b6bb8e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.592838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "27b9693c-3bb0-4819-bbcf-87634b6bb8e3" (UID: "27b9693c-3bb0-4819-bbcf-87634b6bb8e3"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.593560 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "27b9693c-3bb0-4819-bbcf-87634b6bb8e3" (UID: "27b9693c-3bb0-4819-bbcf-87634b6bb8e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.604717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "27b9693c-3bb0-4819-bbcf-87634b6bb8e3" (UID: "27b9693c-3bb0-4819-bbcf-87634b6bb8e3"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.663201 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.663631 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5sdn\" (UniqueName: \"kubernetes.io/projected/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-kube-api-access-c5sdn\") on node \"crc\" DevicePath \"\""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.663738 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.663800 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-inventory\") on node \"crc\" DevicePath \"\""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.663863 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.663991 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9693c-3bb0-4819-bbcf-87634b6bb8e3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.956032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx" event={"ID":"27b9693c-3bb0-4819-bbcf-87634b6bb8e3","Type":"ContainerDied","Data":"c230eef02289150c4e987f815f1051e01187c880ac9df1315e8532377db7f077"}
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.956077 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c230eef02289150c4e987f815f1051e01187c880ac9df1315e8532377db7f077"
Dec 04 06:44:27 crc kubenswrapper[4832]: I1204 06:44:27.956108 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.073756 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"]
Dec 04 06:44:28 crc kubenswrapper[4832]: E1204 06:44:28.074348 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b9693c-3bb0-4819-bbcf-87634b6bb8e3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.074378 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b9693c-3bb0-4819-bbcf-87634b6bb8e3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.074649 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b9693c-3bb0-4819-bbcf-87634b6bb8e3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.075505 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.078267 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.078682 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.078694 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.078873 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.079115 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.092773 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"]
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.176642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.177437 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.177566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.177925 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g4v8\" (UniqueName: \"kubernetes.io/projected/9e2d5cc2-a6c8-4953-8c34-650c047c5848-kube-api-access-7g4v8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.178076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: E1204 06:44:28.200726 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b9693c_3bb0_4819_bbcf_87634b6bb8e3.slice/crio-c230eef02289150c4e987f815f1051e01187c880ac9df1315e8532377db7f077\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b9693c_3bb0_4819_bbcf_87634b6bb8e3.slice\": RecentStats: unable to find data in memory cache]"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.279074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.279178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.279247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.279281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.279361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g4v8\" (UniqueName: \"kubernetes.io/projected/9e2d5cc2-a6c8-4953-8c34-650c047c5848-kube-api-access-7g4v8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.286366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.286739 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.287438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.290557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.305108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g4v8\" (UniqueName: \"kubernetes.io/projected/9e2d5cc2-a6c8-4953-8c34-650c047c5848-kube-api-access-7g4v8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:28 crc kubenswrapper[4832]: I1204 06:44:28.445242 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"
Dec 04 06:44:29 crc kubenswrapper[4832]: I1204 06:44:29.026048 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2"]
Dec 04 06:44:29 crc kubenswrapper[4832]: I1204 06:44:29.977882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" event={"ID":"9e2d5cc2-a6c8-4953-8c34-650c047c5848","Type":"ContainerStarted","Data":"d8ab2239257f051822d63efce4de5d61183b284bdb7515f189679b572abcd0dc"}
Dec 04 06:44:29 crc kubenswrapper[4832]: I1204 06:44:29.978682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" event={"ID":"9e2d5cc2-a6c8-4953-8c34-650c047c5848","Type":"ContainerStarted","Data":"dac13b296836a6ff136c9ebb971f6f3d089ba46067221682a7b0f06561c71f86"}
Dec 04 06:44:30 crc kubenswrapper[4832]: I1204 06:44:30.000501 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" podStartSLOduration=1.818814244 podStartE2EDuration="2.00047828s" podCreationTimestamp="2025-12-04 06:44:28 +0000 UTC" firstStartedPulling="2025-12-04 06:44:29.030800584 +0000 UTC m=+2124.643618290" lastFinishedPulling="2025-12-04 06:44:29.21246462 +0000 UTC m=+2124.825282326" observedRunningTime="2025-12-04 06:44:29.998849871 +0000 UTC m=+2125.611667587" watchObservedRunningTime="2025-12-04 06:44:30.00047828 +0000 UTC m=+2125.613295986"
Dec 04 06:44:35 crc kubenswrapper[4832]: I1204 06:44:35.363018 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:44:35 crc kubenswrapper[4832]: I1204 06:44:35.363583 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.165758 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx"] Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.171099 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.174166 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.175470 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.177798 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx"] Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.247742 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-secret-volume\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.247869 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lz2\" (UniqueName: \"kubernetes.io/projected/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-kube-api-access-w6lz2\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.248178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-config-volume\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.350792 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-secret-volume\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.350940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lz2\" (UniqueName: \"kubernetes.io/projected/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-kube-api-access-w6lz2\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.351791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-config-volume\") pod 
\"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.352945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-config-volume\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.373451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-secret-volume\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.404548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lz2\" (UniqueName: \"kubernetes.io/projected/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-kube-api-access-w6lz2\") pod \"collect-profiles-29413845-dfjnx\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.506731 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:00 crc kubenswrapper[4832]: I1204 06:45:00.989728 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx"] Dec 04 06:45:01 crc kubenswrapper[4832]: I1204 06:45:01.281980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" event={"ID":"5290ce63-ed70-43b7-90f4-6ea3022ab3a6","Type":"ContainerStarted","Data":"a20bb0cbb04b80005f21387d8db5e0bb675554960961b70e1ad253413bb9f6c8"} Dec 04 06:45:01 crc kubenswrapper[4832]: I1204 06:45:01.282508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" event={"ID":"5290ce63-ed70-43b7-90f4-6ea3022ab3a6","Type":"ContainerStarted","Data":"9d3dcd0b914696315fb7897ed862ecc7f8c7aa70f08a661da6a7e8cd23950ec8"} Dec 04 06:45:01 crc kubenswrapper[4832]: I1204 06:45:01.313138 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" podStartSLOduration=1.3131131169999999 podStartE2EDuration="1.313113117s" podCreationTimestamp="2025-12-04 06:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 06:45:01.307735306 +0000 UTC m=+2156.920553022" watchObservedRunningTime="2025-12-04 06:45:01.313113117 +0000 UTC m=+2156.925930823" Dec 04 06:45:02 crc kubenswrapper[4832]: I1204 06:45:02.293934 4832 generic.go:334] "Generic (PLEG): container finished" podID="5290ce63-ed70-43b7-90f4-6ea3022ab3a6" containerID="a20bb0cbb04b80005f21387d8db5e0bb675554960961b70e1ad253413bb9f6c8" exitCode=0 Dec 04 06:45:02 crc kubenswrapper[4832]: I1204 06:45:02.294014 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" event={"ID":"5290ce63-ed70-43b7-90f4-6ea3022ab3a6","Type":"ContainerDied","Data":"a20bb0cbb04b80005f21387d8db5e0bb675554960961b70e1ad253413bb9f6c8"} Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.716033 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.830043 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lz2\" (UniqueName: \"kubernetes.io/projected/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-kube-api-access-w6lz2\") pod \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.830694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-secret-volume\") pod \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.830723 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-config-volume\") pod \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\" (UID: \"5290ce63-ed70-43b7-90f4-6ea3022ab3a6\") " Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.833000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "5290ce63-ed70-43b7-90f4-6ea3022ab3a6" (UID: "5290ce63-ed70-43b7-90f4-6ea3022ab3a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.838896 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5290ce63-ed70-43b7-90f4-6ea3022ab3a6" (UID: "5290ce63-ed70-43b7-90f4-6ea3022ab3a6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.841943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-kube-api-access-w6lz2" (OuterVolumeSpecName: "kube-api-access-w6lz2") pod "5290ce63-ed70-43b7-90f4-6ea3022ab3a6" (UID: "5290ce63-ed70-43b7-90f4-6ea3022ab3a6"). InnerVolumeSpecName "kube-api-access-w6lz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.932501 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lz2\" (UniqueName: \"kubernetes.io/projected/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-kube-api-access-w6lz2\") on node \"crc\" DevicePath \"\"" Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.932539 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:45:03 crc kubenswrapper[4832]: I1204 06:45:03.932550 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5290ce63-ed70-43b7-90f4-6ea3022ab3a6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 06:45:04 crc kubenswrapper[4832]: I1204 06:45:04.314261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" event={"ID":"5290ce63-ed70-43b7-90f4-6ea3022ab3a6","Type":"ContainerDied","Data":"9d3dcd0b914696315fb7897ed862ecc7f8c7aa70f08a661da6a7e8cd23950ec8"} Dec 04 06:45:04 crc kubenswrapper[4832]: I1204 06:45:04.314335 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3dcd0b914696315fb7897ed862ecc7f8c7aa70f08a661da6a7e8cd23950ec8" Dec 04 06:45:04 crc kubenswrapper[4832]: I1204 06:45:04.314360 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413845-dfjnx" Dec 04 06:45:04 crc kubenswrapper[4832]: I1204 06:45:04.398291 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85"] Dec 04 06:45:04 crc kubenswrapper[4832]: I1204 06:45:04.409464 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413800-89r85"] Dec 04 06:45:04 crc kubenswrapper[4832]: I1204 06:45:04.728819 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc09cb39-1b31-47c6-88c7-8c15d31c4960" path="/var/lib/kubelet/pods/bc09cb39-1b31-47c6-88c7-8c15d31c4960/volumes" Dec 04 06:45:05 crc kubenswrapper[4832]: I1204 06:45:05.362893 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:45:05 crc kubenswrapper[4832]: I1204 06:45:05.363613 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:45:05 crc kubenswrapper[4832]: I1204 06:45:05.363804 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:45:05 crc kubenswrapper[4832]: I1204 06:45:05.365813 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"} 
pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:45:05 crc kubenswrapper[4832]: I1204 06:45:05.365995 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" gracePeriod=600 Dec 04 06:45:05 crc kubenswrapper[4832]: E1204 06:45:05.503708 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:45:06 crc kubenswrapper[4832]: I1204 06:45:06.337460 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" exitCode=0 Dec 04 06:45:06 crc kubenswrapper[4832]: I1204 06:45:06.337561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"} Dec 04 06:45:06 crc kubenswrapper[4832]: I1204 06:45:06.337950 4832 scope.go:117] "RemoveContainer" containerID="8f984311e54227f0b4d82b40815aa71ea1d1ea9bcddd7d057924cdb99fbf0789" Dec 04 06:45:06 crc kubenswrapper[4832]: I1204 06:45:06.338849 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:45:06 crc kubenswrapper[4832]: E1204 06:45:06.339286 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:45:19 crc kubenswrapper[4832]: I1204 06:45:19.710775 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:45:19 crc kubenswrapper[4832]: E1204 06:45:19.711868 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:45:21 crc kubenswrapper[4832]: I1204 06:45:21.363138 4832 scope.go:117] "RemoveContainer" containerID="0ea88904d6df2f24d9fbb56f9e2eb00fdb9cb068d94cde01ae7017ecadf11549" Dec 04 06:45:30 crc kubenswrapper[4832]: I1204 06:45:30.711450 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:45:30 crc 
kubenswrapper[4832]: E1204 06:45:30.712637 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:45:42 crc kubenswrapper[4832]: I1204 06:45:42.711254 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:45:42 crc kubenswrapper[4832]: E1204 06:45:42.712116 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:45:56 crc kubenswrapper[4832]: I1204 06:45:56.712046 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:45:56 crc kubenswrapper[4832]: E1204 06:45:56.713425 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:46:09 crc kubenswrapper[4832]: I1204 06:46:09.711828 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:46:09 crc kubenswrapper[4832]: E1204 06:46:09.712763 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:46:24 crc kubenswrapper[4832]: I1204 06:46:24.721075 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:46:24 crc kubenswrapper[4832]: E1204 06:46:24.722356 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:46:35 crc kubenswrapper[4832]: I1204 06:46:35.711071 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:46:35 crc kubenswrapper[4832]: E1204 06:46:35.712004 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
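[Editor's note] The repeating "back-off 5m0s" CrashLoopBackOff entries above reflect the kubelet's documented restart backoff: the delay starts at 10 seconds, doubles on each failed restart, and is capped at 5 minutes (it resets after a container runs cleanly for long enough). A minimal sketch of that delay sequence; the constants are kubelet defaults, not values read from this log:

```python
import itertools

def crashloop_delays(base: float = 10.0, cap: float = 300.0):
    """Yield successive CrashLoopBackOff delays in seconds: base, 2*base, ... capped."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= 2

print(list(itertools.islice(crashloop_delays(), 8)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```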
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:46:47 crc kubenswrapper[4832]: I1204 06:46:47.711809 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:46:47 crc kubenswrapper[4832]: E1204 06:46:47.713248 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.514745 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8l7k"] Dec 04 06:46:50 crc kubenswrapper[4832]: E1204 06:46:50.517272 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5290ce63-ed70-43b7-90f4-6ea3022ab3a6" containerName="collect-profiles" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.517296 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5290ce63-ed70-43b7-90f4-6ea3022ab3a6" containerName="collect-profiles" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.517684 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5290ce63-ed70-43b7-90f4-6ea3022ab3a6" containerName="collect-profiles" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.519430 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.582384 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8l7k"] Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.598870 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-catalog-content\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.598952 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-utilities\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.599005 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzk5\" (UniqueName: \"kubernetes.io/projected/f7cdf738-cfdd-404f-a3f7-387291dce6e1-kube-api-access-5qzk5\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.701263 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-catalog-content\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.701319 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-utilities\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.701348 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzk5\" (UniqueName: \"kubernetes.io/projected/f7cdf738-cfdd-404f-a3f7-387291dce6e1-kube-api-access-5qzk5\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.702094 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-catalog-content\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.702187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-utilities\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.723652 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5qzk5\" (UniqueName: \"kubernetes.io/projected/f7cdf738-cfdd-404f-a3f7-387291dce6e1-kube-api-access-5qzk5\") pod \"community-operators-h8l7k\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") " pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:50 crc kubenswrapper[4832]: I1204 06:46:50.860917 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8l7k" Dec 04 06:46:51 crc kubenswrapper[4832]: I1204 06:46:51.505292 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8l7k"] Dec 04 06:46:51 crc kubenswrapper[4832]: I1204 06:46:51.570438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerStarted","Data":"2c7aa304e8c037320989823b6e33cac671347d282b5b2b013973474c5f9c1040"} Dec 04 06:46:52 crc kubenswrapper[4832]: I1204 06:46:52.587701 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerID="9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b" exitCode=0 Dec 04 06:46:52 crc kubenswrapper[4832]: I1204 06:46:52.587775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerDied","Data":"9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b"} Dec 04 06:46:53 crc kubenswrapper[4832]: I1204 06:46:53.605869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerStarted","Data":"06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef"} Dec 04 06:46:54 crc kubenswrapper[4832]: I1204 06:46:54.618663 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerID="06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef" exitCode=0 Dec 04 06:46:54 crc kubenswrapper[4832]: I1204 06:46:54.618705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerDied","Data":"06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef"} Dec 04 06:46:54 crc kubenswrapper[4832]: I1204 06:46:54.621444 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:46:55 crc kubenswrapper[4832]: I1204 06:46:55.632518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerStarted","Data":"cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511"} Dec 04 06:46:55 crc kubenswrapper[4832]: I1204 06:46:55.661830 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8l7k" podStartSLOduration=3.203281193 podStartE2EDuration="5.661801125s" podCreationTimestamp="2025-12-04 06:46:50 +0000 UTC" firstStartedPulling="2025-12-04 06:46:52.591694025 +0000 UTC m=+2268.204511731" lastFinishedPulling="2025-12-04 06:46:55.050213957 +0000 UTC m=+2270.663031663" observedRunningTime="2025-12-04 06:46:55.653815791 +0000 UTC m=+2271.266633517" watchObservedRunningTime="2025-12-04 
Dec 04 06:47:00 crc kubenswrapper[4832]: I1204 06:47:00.861051 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h8l7k"
Dec 04 06:47:00 crc kubenswrapper[4832]: I1204 06:47:00.861792 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8l7k"
Dec 04 06:47:00 crc kubenswrapper[4832]: I1204 06:47:00.923609 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8l7k"
Dec 04 06:47:01 crc kubenswrapper[4832]: I1204 06:47:01.710645 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"
Dec 04 06:47:01 crc kubenswrapper[4832]: E1204 06:47:01.710871 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:47:01 crc kubenswrapper[4832]: I1204 06:47:01.750978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8l7k"
Dec 04 06:47:01 crc kubenswrapper[4832]: I1204 06:47:01.823852 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8l7k"]
Dec 04 06:47:03 crc kubenswrapper[4832]: I1204 06:47:03.723324 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h8l7k" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="registry-server" containerID="cri-o://cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511" gracePeriod=2
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.680258 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8l7k"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.755295 4832 generic.go:334] "Generic (PLEG): container finished" podID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerID="cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511" exitCode=0
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.755499 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8l7k"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.759440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerDied","Data":"cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511"}
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.759525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8l7k" event={"ID":"f7cdf738-cfdd-404f-a3f7-387291dce6e1","Type":"ContainerDied","Data":"2c7aa304e8c037320989823b6e33cac671347d282b5b2b013973474c5f9c1040"}
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.759559 4832 scope.go:117] "RemoveContainer" containerID="cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.761132 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-catalog-content\") pod \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") "
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.761590 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qzk5\" (UniqueName: \"kubernetes.io/projected/f7cdf738-cfdd-404f-a3f7-387291dce6e1-kube-api-access-5qzk5\") pod \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") "
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.761953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-utilities\") pod \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\" (UID: \"f7cdf738-cfdd-404f-a3f7-387291dce6e1\") "
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.763316 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-utilities" (OuterVolumeSpecName: "utilities") pod "f7cdf738-cfdd-404f-a3f7-387291dce6e1" (UID: "f7cdf738-cfdd-404f-a3f7-387291dce6e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.763982 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.772610 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cdf738-cfdd-404f-a3f7-387291dce6e1-kube-api-access-5qzk5" (OuterVolumeSpecName: "kube-api-access-5qzk5") pod "f7cdf738-cfdd-404f-a3f7-387291dce6e1" (UID: "f7cdf738-cfdd-404f-a3f7-387291dce6e1"). InnerVolumeSpecName "kube-api-access-5qzk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.817016 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7cdf738-cfdd-404f-a3f7-387291dce6e1" (UID: "f7cdf738-cfdd-404f-a3f7-387291dce6e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.828065 4832 scope.go:117] "RemoveContainer" containerID="06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.850046 4832 scope.go:117] "RemoveContainer" containerID="9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.867786 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qzk5\" (UniqueName: \"kubernetes.io/projected/f7cdf738-cfdd-404f-a3f7-387291dce6e1-kube-api-access-5qzk5\") on node \"crc\" DevicePath \"\""
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.867837 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7cdf738-cfdd-404f-a3f7-387291dce6e1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.896157 4832 scope.go:117] "RemoveContainer" containerID="cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511"
Dec 04 06:47:04 crc kubenswrapper[4832]: E1204 06:47:04.896625 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511\": container with ID starting with cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511 not found: ID does not exist" containerID="cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.896674 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511"} err="failed to get container status \"cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511\": rpc error: code = NotFound desc = could not find container \"cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511\": container with ID starting with cb86bdeeea2eecc4ee8f2a39c94e51f699ea3a36d86126053b5b92b575742511 not found: ID does not exist"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.896705 4832 scope.go:117] "RemoveContainer" containerID="06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef"
Dec 04 06:47:04 crc kubenswrapper[4832]: E1204 06:47:04.897236 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef\": container with ID starting with 06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef not found: ID does not exist" containerID="06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.897266 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef"} err="failed to get container status \"06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef\": rpc error: code = NotFound desc = could not find container \"06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef\": container with ID starting with 06b937ae2b1db1d747f1bb903de09c649d20c75dae9475b4cbc6ef0f93fcffef not found: ID does not exist"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.897287 4832 scope.go:117] "RemoveContainer" containerID="9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b"
Dec 04 06:47:04 crc kubenswrapper[4832]: E1204 06:47:04.897644 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b\": container with ID starting with 9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b not found: ID does not exist" containerID="9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b"
Dec 04 06:47:04 crc kubenswrapper[4832]: I1204 06:47:04.897673 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b"} err="failed to get container status \"9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b\": rpc error: code = NotFound desc = could not find container \"9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b\": container with ID starting with 9aea130d19779fa25dc5c8d43b4c138230ae8c27f2721545f14446eb73a9ba7b not found: ID does not exist"
Dec 04 06:47:05 crc kubenswrapper[4832]: I1204 06:47:05.104503 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8l7k"]
Dec 04 06:47:05 crc kubenswrapper[4832]: I1204 06:47:05.112453 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h8l7k"]
Dec 04 06:47:06 crc kubenswrapper[4832]: I1204 06:47:06.732235 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" path="/var/lib/kubelet/pods/f7cdf738-cfdd-404f-a3f7-387291dce6e1/volumes"
Dec 04 06:47:14 crc kubenswrapper[4832]: I1204 06:47:14.719947 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"
Dec 04 06:47:14 crc kubenswrapper[4832]: E1204 06:47:14.720840 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:47:27 crc kubenswrapper[4832]: I1204 06:47:27.710568 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"
Dec 04 06:47:27 crc kubenswrapper[4832]: E1204 06:47:27.712052 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:47:42 crc kubenswrapper[4832]: I1204 06:47:42.711716 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db"
Dec 04 06:47:42 crc kubenswrapper[4832]: E1204 06:47:42.712904 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:47:53 crc kubenswrapper[4832]: I1204 06:47:53.710978 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:47:53 crc kubenswrapper[4832]: E1204 06:47:53.712206 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:48:04 crc kubenswrapper[4832]: I1204 06:48:04.718899 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:48:04 crc kubenswrapper[4832]: E1204 06:48:04.719917 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.945279 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlsmv"] Dec 04 06:48:08 crc kubenswrapper[4832]: E1204 06:48:08.946979 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="registry-server" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.946998 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="registry-server" Dec 04 06:48:08 crc kubenswrapper[4832]: E1204 06:48:08.947055 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="extract-utilities" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.947064 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="extract-utilities" Dec 04 06:48:08 crc kubenswrapper[4832]: E1204 06:48:08.947080 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="extract-content" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.947089 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="extract-content" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.947327 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cdf738-cfdd-404f-a3f7-387291dce6e1" containerName="registry-server" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.949492 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:08 crc kubenswrapper[4832]: I1204 06:48:08.972581 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlsmv"] Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.069089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghnw\" (UniqueName: \"kubernetes.io/projected/dbb79939-f8cc-4755-b765-d7ca424c910e-kube-api-access-zghnw\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.069205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-catalog-content\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.069234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-utilities\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.171741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zghnw\" (UniqueName: \"kubernetes.io/projected/dbb79939-f8cc-4755-b765-d7ca424c910e-kube-api-access-zghnw\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.171913 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-catalog-content\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.171952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-utilities\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.172593 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-catalog-content\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.172660 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-utilities\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.199598 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zghnw\" (UniqueName: \"kubernetes.io/projected/dbb79939-f8cc-4755-b765-d7ca424c910e-kube-api-access-zghnw\") pod \"certified-operators-tlsmv\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.296022 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:09 crc kubenswrapper[4832]: I1204 06:48:09.848487 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlsmv"] Dec 04 06:48:10 crc kubenswrapper[4832]: I1204 06:48:10.487040 4832 generic.go:334] "Generic (PLEG): container finished" podID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerID="e661e2b1c7a269370cd40a915fd07cea310eba6d92f8acd569aa8559894c46f9" exitCode=0 Dec 04 06:48:10 crc kubenswrapper[4832]: I1204 06:48:10.487135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerDied","Data":"e661e2b1c7a269370cd40a915fd07cea310eba6d92f8acd569aa8559894c46f9"} Dec 04 06:48:10 crc kubenswrapper[4832]: I1204 06:48:10.487638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerStarted","Data":"a82d5b2f7259e530c5baeb3b1788a75e4085a3a9b4473f9558632e6ac1bc270b"} Dec 04 06:48:11 crc kubenswrapper[4832]: I1204 06:48:11.501765 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerStarted","Data":"251b8995a4ab0b4dc4e8270449c31baa075cafc77a79ebba66cd7f7c7259f8fe"} Dec 04 06:48:12 crc kubenswrapper[4832]: I1204 06:48:12.532206 4832 generic.go:334] "Generic (PLEG): container finished" podID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerID="251b8995a4ab0b4dc4e8270449c31baa075cafc77a79ebba66cd7f7c7259f8fe" exitCode=0 Dec 04 06:48:12 crc kubenswrapper[4832]: I1204 06:48:12.532935 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerDied","Data":"251b8995a4ab0b4dc4e8270449c31baa075cafc77a79ebba66cd7f7c7259f8fe"} Dec 04 06:48:13 crc kubenswrapper[4832]: I1204 06:48:13.546884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerStarted","Data":"c29147fafd6b652b08b9b5d89d46d90edc11d2811070ba6a7c05341b9a95ee15"} Dec 04 06:48:13 crc kubenswrapper[4832]: I1204 06:48:13.582198 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlsmv" podStartSLOduration=2.894538283 podStartE2EDuration="5.582171194s" podCreationTimestamp="2025-12-04 06:48:08 +0000 UTC" firstStartedPulling="2025-12-04 06:48:10.490320464 +0000 UTC m=+2346.103138170" lastFinishedPulling="2025-12-04 06:48:13.177953375 +0000 UTC m=+2348.790771081" observedRunningTime="2025-12-04 06:48:13.566182555 +0000 UTC m=+2349.179000261" watchObservedRunningTime="2025-12-04 06:48:13.582171194 +0000 UTC m=+2349.194988920" Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.297206 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.297895 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.353177 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.620036 4832 generic.go:334] "Generic (PLEG): container finished" podID="9e2d5cc2-a6c8-4953-8c34-650c047c5848" containerID="d8ab2239257f051822d63efce4de5d61183b284bdb7515f189679b572abcd0dc" exitCode=0 Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.620146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" event={"ID":"9e2d5cc2-a6c8-4953-8c34-650c047c5848","Type":"ContainerDied","Data":"d8ab2239257f051822d63efce4de5d61183b284bdb7515f189679b572abcd0dc"} Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.694414 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.710523 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:48:19 crc kubenswrapper[4832]: E1204 06:48:19.710834 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:48:19 crc kubenswrapper[4832]: I1204 06:48:19.753039 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlsmv"] Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.065377 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.164942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-inventory\") pod \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.165074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g4v8\" (UniqueName: \"kubernetes.io/projected/9e2d5cc2-a6c8-4953-8c34-650c047c5848-kube-api-access-7g4v8\") pod \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.165361 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-ssh-key\") pod \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.165588 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-combined-ca-bundle\") pod \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.165665 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-secret-0\") pod \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\" (UID: \"9e2d5cc2-a6c8-4953-8c34-650c047c5848\") " Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.173652 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9e2d5cc2-a6c8-4953-8c34-650c047c5848" (UID: "9e2d5cc2-a6c8-4953-8c34-650c047c5848"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.173697 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2d5cc2-a6c8-4953-8c34-650c047c5848-kube-api-access-7g4v8" (OuterVolumeSpecName: "kube-api-access-7g4v8") pod "9e2d5cc2-a6c8-4953-8c34-650c047c5848" (UID: "9e2d5cc2-a6c8-4953-8c34-650c047c5848"). InnerVolumeSpecName "kube-api-access-7g4v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.201037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9e2d5cc2-a6c8-4953-8c34-650c047c5848" (UID: "9e2d5cc2-a6c8-4953-8c34-650c047c5848"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.203886 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e2d5cc2-a6c8-4953-8c34-650c047c5848" (UID: "9e2d5cc2-a6c8-4953-8c34-650c047c5848"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.205897 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-inventory" (OuterVolumeSpecName: "inventory") pod "9e2d5cc2-a6c8-4953-8c34-650c047c5848" (UID: "9e2d5cc2-a6c8-4953-8c34-650c047c5848"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.268575 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.269063 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.269086 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g4v8\" (UniqueName: \"kubernetes.io/projected/9e2d5cc2-a6c8-4953-8c34-650c047c5848-kube-api-access-7g4v8\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.269109 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.269130 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e2d5cc2-a6c8-4953-8c34-650c047c5848-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.645101 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.645103 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2" event={"ID":"9e2d5cc2-a6c8-4953-8c34-650c047c5848","Type":"ContainerDied","Data":"dac13b296836a6ff136c9ebb971f6f3d089ba46067221682a7b0f06561c71f86"} Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.645179 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac13b296836a6ff136c9ebb971f6f3d089ba46067221682a7b0f06561c71f86" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.645211 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlsmv" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="registry-server" containerID="cri-o://c29147fafd6b652b08b9b5d89d46d90edc11d2811070ba6a7c05341b9a95ee15" gracePeriod=2 Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.789800 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz"] Dec 04 06:48:21 crc kubenswrapper[4832]: E1204 06:48:21.790365 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2d5cc2-a6c8-4953-8c34-650c047c5848" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.790412 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2d5cc2-a6c8-4953-8c34-650c047c5848" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.790707 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2d5cc2-a6c8-4953-8c34-650c047c5848" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.791649 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.794713 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.795307 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.795528 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.795719 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.796028 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.796198 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.799088 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.810275 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz"] Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dcd764c1-4caa-4556-a755-3237f104b88e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884179 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884232 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.884344 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nb6v\" (UniqueName: \"kubernetes.io/projected/dcd764c1-4caa-4556-a755-3237f104b88e-kube-api-access-9nb6v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986156 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986530 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986702 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nb6v\" (UniqueName: \"kubernetes.io/projected/dcd764c1-4caa-4556-a755-3237f104b88e-kube-api-access-9nb6v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.986945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.987018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dcd764c1-4caa-4556-a755-3237f104b88e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.989318 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dcd764c1-4caa-4556-a755-3237f104b88e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.992315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.992155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.992633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.993129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.993441 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.995001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:21 crc kubenswrapper[4832]: I1204 06:48:21.995848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:22 crc kubenswrapper[4832]: I1204 06:48:22.013606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nb6v\" (UniqueName: \"kubernetes.io/projected/dcd764c1-4caa-4556-a755-3237f104b88e-kube-api-access-9nb6v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gklbz\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:22 crc kubenswrapper[4832]: I1204 06:48:22.137826 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:48:22 crc kubenswrapper[4832]: I1204 06:48:22.658845 4832 generic.go:334] "Generic (PLEG): container finished" podID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerID="c29147fafd6b652b08b9b5d89d46d90edc11d2811070ba6a7c05341b9a95ee15" exitCode=0 Dec 04 06:48:22 crc kubenswrapper[4832]: I1204 06:48:22.658942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerDied","Data":"c29147fafd6b652b08b9b5d89d46d90edc11d2811070ba6a7c05341b9a95ee15"} Dec 04 06:48:22 crc kubenswrapper[4832]: I1204 06:48:22.709251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz"] Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.338545 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.430384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-catalog-content\") pod \"dbb79939-f8cc-4755-b765-d7ca424c910e\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.430478 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghnw\" (UniqueName: \"kubernetes.io/projected/dbb79939-f8cc-4755-b765-d7ca424c910e-kube-api-access-zghnw\") pod \"dbb79939-f8cc-4755-b765-d7ca424c910e\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.430766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-utilities\") pod \"dbb79939-f8cc-4755-b765-d7ca424c910e\" (UID: \"dbb79939-f8cc-4755-b765-d7ca424c910e\") " Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.431949 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-utilities" (OuterVolumeSpecName: "utilities") pod "dbb79939-f8cc-4755-b765-d7ca424c910e" (UID: "dbb79939-f8cc-4755-b765-d7ca424c910e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.437304 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb79939-f8cc-4755-b765-d7ca424c910e-kube-api-access-zghnw" (OuterVolumeSpecName: "kube-api-access-zghnw") pod "dbb79939-f8cc-4755-b765-d7ca424c910e" (UID: "dbb79939-f8cc-4755-b765-d7ca424c910e"). InnerVolumeSpecName "kube-api-access-zghnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.490467 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbb79939-f8cc-4755-b765-d7ca424c910e" (UID: "dbb79939-f8cc-4755-b765-d7ca424c910e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.533164 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.533200 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb79939-f8cc-4755-b765-d7ca424c910e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.533212 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zghnw\" (UniqueName: \"kubernetes.io/projected/dbb79939-f8cc-4755-b765-d7ca424c910e-kube-api-access-zghnw\") on node \"crc\" DevicePath \"\"" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.671824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlsmv" event={"ID":"dbb79939-f8cc-4755-b765-d7ca424c910e","Type":"ContainerDied","Data":"a82d5b2f7259e530c5baeb3b1788a75e4085a3a9b4473f9558632e6ac1bc270b"} Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.671852 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlsmv" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.671891 4832 scope.go:117] "RemoveContainer" containerID="c29147fafd6b652b08b9b5d89d46d90edc11d2811070ba6a7c05341b9a95ee15" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.673657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" event={"ID":"dcd764c1-4caa-4556-a755-3237f104b88e","Type":"ContainerStarted","Data":"0e11c3ddff99dadcc8d9a05efd05a4ac37fa9469a0359bf7ebe1b9e243be188d"} Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.673704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" event={"ID":"dcd764c1-4caa-4556-a755-3237f104b88e","Type":"ContainerStarted","Data":"52e19f778c5ef69a9c4a0ad3956235cd7dde5165e7fbb311719fcb9476392216"} Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.698093 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" podStartSLOduration=2.451590393 podStartE2EDuration="2.698069836s" podCreationTimestamp="2025-12-04 06:48:21 +0000 UTC" firstStartedPulling="2025-12-04 06:48:22.719368372 +0000 UTC m=+2358.332186078" lastFinishedPulling="2025-12-04 06:48:22.965847815 +0000 UTC m=+2358.578665521" observedRunningTime="2025-12-04 06:48:23.693898494 +0000 UTC m=+2359.306716200" watchObservedRunningTime="2025-12-04 06:48:23.698069836 +0000 UTC m=+2359.310887542" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.713671 4832 scope.go:117] "RemoveContainer" containerID="251b8995a4ab0b4dc4e8270449c31baa075cafc77a79ebba66cd7f7c7259f8fe" Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.721783 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlsmv"] Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.735667 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlsmv"] Dec 04 06:48:23 crc kubenswrapper[4832]: I1204 06:48:23.754617 4832 scope.go:117] "RemoveContainer" 
containerID="e661e2b1c7a269370cd40a915fd07cea310eba6d92f8acd569aa8559894c46f9" Dec 04 06:48:24 crc kubenswrapper[4832]: I1204 06:48:24.725198 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" path="/var/lib/kubelet/pods/dbb79939-f8cc-4755-b765-d7ca424c910e/volumes" Dec 04 06:48:32 crc kubenswrapper[4832]: I1204 06:48:32.711419 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:48:32 crc kubenswrapper[4832]: E1204 06:48:32.712220 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:48:46 crc kubenswrapper[4832]: I1204 06:48:46.711885 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:48:46 crc kubenswrapper[4832]: E1204 06:48:46.713248 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:48:58 crc kubenswrapper[4832]: I1204 06:48:58.713078 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:48:58 crc kubenswrapper[4832]: E1204 06:48:58.714448 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:49:11 crc kubenswrapper[4832]: I1204 06:49:11.711339 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:49:11 crc kubenswrapper[4832]: E1204 06:49:11.712606 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:49:24 crc kubenswrapper[4832]: I1204 06:49:24.719656 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:49:24 crc kubenswrapper[4832]: E1204 06:49:24.720945 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:49:35 crc kubenswrapper[4832]: I1204 06:49:35.711281 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:49:35 crc kubenswrapper[4832]: E1204 06:49:35.712460 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:49:46 crc kubenswrapper[4832]: I1204 06:49:46.711311 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:49:46 crc kubenswrapper[4832]: E1204 06:49:46.712324 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:50:01 crc kubenswrapper[4832]: I1204 06:50:01.710188 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:50:01 crc kubenswrapper[4832]: E1204 06:50:01.711282 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:50:16 crc kubenswrapper[4832]: I1204 06:50:16.716474 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:50:17 crc kubenswrapper[4832]: I1204 06:50:17.110856 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"ac7730c9b11cdb80159176610c49dc8399f4f04cdcae1508d2787a067aba46cd"} Dec 04 06:51:05 crc kubenswrapper[4832]: I1204 06:51:05.663859 4832 generic.go:334] "Generic (PLEG): container finished" podID="dcd764c1-4caa-4556-a755-3237f104b88e" containerID="0e11c3ddff99dadcc8d9a05efd05a4ac37fa9469a0359bf7ebe1b9e243be188d" exitCode=0 Dec 04 06:51:05 crc kubenswrapper[4832]: I1204 06:51:05.663957 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" event={"ID":"dcd764c1-4caa-4556-a755-3237f104b88e","Type":"ContainerDied","Data":"0e11c3ddff99dadcc8d9a05efd05a4ac37fa9469a0359bf7ebe1b9e243be188d"} Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.150489 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-combined-ca-bundle\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nb6v\" (UniqueName: \"kubernetes.io/projected/dcd764c1-4caa-4556-a755-3237f104b88e-kube-api-access-9nb6v\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300427 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-ssh-key\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-0\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300523 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-inventory\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300656 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dcd764c1-4caa-4556-a755-3237f104b88e-nova-extra-config-0\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-0\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300839 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-1\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.300944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-1\") pod \"dcd764c1-4caa-4556-a755-3237f104b88e\" (UID: \"dcd764c1-4caa-4556-a755-3237f104b88e\") " Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.308819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.310600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd764c1-4caa-4556-a755-3237f104b88e-kube-api-access-9nb6v" (OuterVolumeSpecName: "kube-api-access-9nb6v") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "kube-api-access-9nb6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.338426 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-inventory" (OuterVolumeSpecName: "inventory") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.339978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.344111 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.345041 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.350261 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.351858 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd764c1-4caa-4556-a755-3237f104b88e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.352169 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dcd764c1-4caa-4556-a755-3237f104b88e" (UID: "dcd764c1-4caa-4556-a755-3237f104b88e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.403918 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.403977 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.403986 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nb6v\" (UniqueName: \"kubernetes.io/projected/dcd764c1-4caa-4556-a755-3237f104b88e-kube-api-access-9nb6v\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.404000 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.404009 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.404019 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.404032 4832 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dcd764c1-4caa-4556-a755-3237f104b88e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.404041 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.404051 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dcd764c1-4caa-4556-a755-3237f104b88e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.688074 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" event={"ID":"dcd764c1-4caa-4556-a755-3237f104b88e","Type":"ContainerDied","Data":"52e19f778c5ef69a9c4a0ad3956235cd7dde5165e7fbb311719fcb9476392216"} Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.688130 4832 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="52e19f778c5ef69a9c4a0ad3956235cd7dde5165e7fbb311719fcb9476392216" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.688187 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gklbz" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.855346 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p"] Dec 04 06:51:07 crc kubenswrapper[4832]: E1204 06:51:07.856101 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="extract-utilities" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.856121 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="extract-utilities" Dec 04 06:51:07 crc kubenswrapper[4832]: E1204 06:51:07.856140 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="registry-server" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.856147 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="registry-server" Dec 04 06:51:07 crc kubenswrapper[4832]: E1204 06:51:07.856169 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd764c1-4caa-4556-a755-3237f104b88e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.856175 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd764c1-4caa-4556-a755-3237f104b88e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 04 06:51:07 crc kubenswrapper[4832]: E1204 06:51:07.856188 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="extract-content" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.856196 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="extract-content" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.856402 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb79939-f8cc-4755-b765-d7ca424c910e" containerName="registry-server" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.856433 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd764c1-4caa-4556-a755-3237f104b88e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.857202 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.862120 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.862280 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mnsf6" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.862370 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.867990 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.868177 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 06:51:07 crc kubenswrapper[4832]: I1204 06:51:07.886653 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p"] Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.022799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.022921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.023008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4v5\" (UniqueName: \"kubernetes.io/projected/69a026e8-d207-4ccd-86c7-6e646a80529c-kube-api-access-qv4v5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.023029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.023081 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 
06:51:08.023128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.023181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv4v5\" (UniqueName: \"kubernetes.io/projected/69a026e8-d207-4ccd-86c7-6e646a80529c-kube-api-access-qv4v5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125224 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125372 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.125457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.131973 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.132783 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.133508 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.135626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.136279 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.136356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.152055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv4v5\" (UniqueName: \"kubernetes.io/projected/69a026e8-d207-4ccd-86c7-6e646a80529c-kube-api-access-qv4v5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p\" (UID: 
\"69a026e8-d207-4ccd-86c7-6e646a80529c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.187031 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:51:08 crc kubenswrapper[4832]: I1204 06:51:08.753959 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p"] Dec 04 06:51:09 crc kubenswrapper[4832]: I1204 06:51:09.717426 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" event={"ID":"69a026e8-d207-4ccd-86c7-6e646a80529c","Type":"ContainerStarted","Data":"907be3bb2555e151203b107ad96c0981c354fc9746710d4e862d006e999a8a3a"} Dec 04 06:51:09 crc kubenswrapper[4832]: I1204 06:51:09.718284 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" event={"ID":"69a026e8-d207-4ccd-86c7-6e646a80529c","Type":"ContainerStarted","Data":"dfcb24d930cbaef18b72e696cd3ded2d4922d156fb7c066ac9d4404b2847cfea"} Dec 04 06:51:09 crc kubenswrapper[4832]: I1204 06:51:09.755214 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" podStartSLOduration=2.564286232 podStartE2EDuration="2.755179651s" podCreationTimestamp="2025-12-04 06:51:07 +0000 UTC" firstStartedPulling="2025-12-04 06:51:08.763722128 +0000 UTC m=+2524.376539874" lastFinishedPulling="2025-12-04 06:51:08.954615587 +0000 UTC m=+2524.567433293" observedRunningTime="2025-12-04 06:51:09.747008941 +0000 UTC m=+2525.359826697" watchObservedRunningTime="2025-12-04 06:51:09.755179651 +0000 UTC m=+2525.367997397" Dec 04 06:52:35 crc kubenswrapper[4832]: I1204 06:52:35.363628 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:52:35 crc kubenswrapper[4832]: I1204 06:52:35.364767 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:53:05 crc kubenswrapper[4832]: I1204 06:53:05.363044 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:53:05 crc kubenswrapper[4832]: I1204 06:53:05.363853 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:53:19 crc kubenswrapper[4832]: I1204 06:53:19.336510 4832 generic.go:334] "Generic (PLEG): container finished" podID="69a026e8-d207-4ccd-86c7-6e646a80529c" 
containerID="907be3bb2555e151203b107ad96c0981c354fc9746710d4e862d006e999a8a3a" exitCode=0 Dec 04 06:53:19 crc kubenswrapper[4832]: I1204 06:53:19.336640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" event={"ID":"69a026e8-d207-4ccd-86c7-6e646a80529c","Type":"ContainerDied","Data":"907be3bb2555e151203b107ad96c0981c354fc9746710d4e862d006e999a8a3a"} Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.815940 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955221 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv4v5\" (UniqueName: \"kubernetes.io/projected/69a026e8-d207-4ccd-86c7-6e646a80529c-kube-api-access-qv4v5\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955360 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-1\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-telemetry-combined-ca-bundle\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-2\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-inventory\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955648 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ssh-key\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.955672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-0\") pod \"69a026e8-d207-4ccd-86c7-6e646a80529c\" (UID: \"69a026e8-d207-4ccd-86c7-6e646a80529c\") " Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.963500 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a026e8-d207-4ccd-86c7-6e646a80529c-kube-api-access-qv4v5" (OuterVolumeSpecName: "kube-api-access-qv4v5") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: 
"69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "kube-api-access-qv4v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.976732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: "69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.992024 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: "69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.994591 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-inventory" (OuterVolumeSpecName: "inventory") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: "69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.994638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: "69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:53:20 crc kubenswrapper[4832]: I1204 06:53:20.997745 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: "69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.003322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "69a026e8-d207-4ccd-86c7-6e646a80529c" (UID: "69a026e8-d207-4ccd-86c7-6e646a80529c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058315 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058366 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058378 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058545 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058564 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv4v5\" (UniqueName: \"kubernetes.io/projected/69a026e8-d207-4ccd-86c7-6e646a80529c-kube-api-access-qv4v5\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058574 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.058583 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a026e8-d207-4ccd-86c7-6e646a80529c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.362629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" event={"ID":"69a026e8-d207-4ccd-86c7-6e646a80529c","Type":"ContainerDied","Data":"dfcb24d930cbaef18b72e696cd3ded2d4922d156fb7c066ac9d4404b2847cfea"} Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.363027 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcb24d930cbaef18b72e696cd3ded2d4922d156fb7c066ac9d4404b2847cfea" Dec 04 06:53:21 crc kubenswrapper[4832]: I1204 06:53:21.362773 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p" Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.362916 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.363915 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.363996 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.365203 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac7730c9b11cdb80159176610c49dc8399f4f04cdcae1508d2787a067aba46cd"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.365307 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://ac7730c9b11cdb80159176610c49dc8399f4f04cdcae1508d2787a067aba46cd" gracePeriod=600 Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.550461 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="ac7730c9b11cdb80159176610c49dc8399f4f04cdcae1508d2787a067aba46cd" exitCode=0 Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.550830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"ac7730c9b11cdb80159176610c49dc8399f4f04cdcae1508d2787a067aba46cd"} Dec 04 06:53:35 crc kubenswrapper[4832]: I1204 06:53:35.550871 4832 scope.go:117] "RemoveContainer" containerID="4bec2683e28c40629d40144217cebd8d3c4cad2c8d57af40fed4e62b576051db" Dec 04 06:53:36 crc kubenswrapper[4832]: I1204 06:53:36.564248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"} Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.129179 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 04 06:54:04 crc kubenswrapper[4832]: E1204 06:54:04.133234 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a026e8-d207-4ccd-86c7-6e646a80529c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.133261 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a026e8-d207-4ccd-86c7-6e646a80529c" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.133727 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a026e8-d207-4ccd-86c7-6e646a80529c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.134624 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.140698 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.141532 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.141532 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.142140 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-29dvv" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.142174 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.178968 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.179286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.179367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.179504 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.179758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-config-data\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.179834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.180195 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.180339 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.180501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jlq\" (UniqueName: \"kubernetes.io/projected/068b63a2-ea9f-4022-8a42-8d345222f5a7-kube-api-access-r4jlq\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282793 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jlq\" (UniqueName: \"kubernetes.io/projected/068b63a2-ea9f-4022-8a42-8d345222f5a7-kube-api-access-r4jlq\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282852 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config\") pod \"tempest-tests-tempest\" 
(UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.282965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.283004 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-config-data\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.283031 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.283647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.283863 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.284333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.284735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-config-data\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.284935 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.292354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " 
pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.295213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.295224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.306650 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4jlq\" (UniqueName: \"kubernetes.io/projected/068b63a2-ea9f-4022-8a42-8d345222f5a7-kube-api-access-r4jlq\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.337098 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.464205 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.962972 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 04 06:54:04 crc kubenswrapper[4832]: I1204 06:54:04.970818 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:54:05 crc kubenswrapper[4832]: I1204 06:54:05.891896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"068b63a2-ea9f-4022-8a42-8d345222f5a7","Type":"ContainerStarted","Data":"1f3a46cf793436f9249a0dab9a3a4fb180cf5c6c8f54e9749c51124ae41decf1"} Dec 04 06:54:35 crc kubenswrapper[4832]: E1204 06:54:35.248825 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 04 06:54:35 crc kubenswrapper[4832]: E1204 06:54:35.249726 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4jlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(068b63a2-ea9f-4022-8a42-8d345222f5a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 06:54:35 crc kubenswrapper[4832]: E1204 06:54:35.250997 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="068b63a2-ea9f-4022-8a42-8d345222f5a7"
Dec 04 06:54:36 crc kubenswrapper[4832]: E1204 06:54:36.249418 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="068b63a2-ea9f-4022-8a42-8d345222f5a7"
Dec 04 06:54:48 crc kubenswrapper[4832]: I1204 06:54:48.149564 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Dec 04 06:54:49 crc kubenswrapper[4832]: I1204 06:54:49.781621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"068b63a2-ea9f-4022-8a42-8d345222f5a7","Type":"ContainerStarted","Data":"6a1f9bc50e66eddce4a083aec14ade328e00a08bb888af6b70555e57df626be3"}
Dec 04 06:54:49 crc kubenswrapper[4832]: I1204 06:54:49.818850 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.642717883 podStartE2EDuration="46.818825644s" podCreationTimestamp="2025-12-04 06:54:03 +0000 UTC" firstStartedPulling="2025-12-04 06:54:04.970542779 +0000 UTC m=+2700.583360475" lastFinishedPulling="2025-12-04 06:54:48.14665053 +0000 UTC m=+2743.759468236" observedRunningTime="2025-12-04 06:54:49.805615572 +0000 UTC m=+2745.418433278" watchObservedRunningTime="2025-12-04 06:54:49.818825644 +0000 UTC m=+2745.431643350"
Dec 04 06:55:13 crc kubenswrapper[4832]: I1204 06:55:13.868892 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8hfk8"]
Dec 04 06:55:13 crc kubenswrapper[4832]: I1204 06:55:13.872611 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:13 crc kubenswrapper[4832]: I1204 06:55:13.901635 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8hfk8"]
Dec 04 06:55:13 crc kubenswrapper[4832]: I1204 06:55:13.964960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-utilities\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:13 crc kubenswrapper[4832]: I1204 06:55:13.965104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-catalog-content\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:13 crc kubenswrapper[4832]: I1204 06:55:13.965136 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zcgg\" (UniqueName: \"kubernetes.io/projected/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-kube-api-access-7zcgg\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.067635 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-catalog-content\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.067699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zcgg\" (UniqueName: \"kubernetes.io/projected/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-kube-api-access-7zcgg\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.067855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-utilities\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.068520 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-utilities\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.068757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-catalog-content\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.108370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zcgg\" (UniqueName: \"kubernetes.io/projected/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-kube-api-access-7zcgg\") pod \"redhat-operators-8hfk8\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") " pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.203666 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:14 crc kubenswrapper[4832]: I1204 06:55:14.747289 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8hfk8"]
Dec 04 06:55:15 crc kubenswrapper[4832]: I1204 06:55:15.070609 4832 generic.go:334] "Generic (PLEG): container finished" podID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerID="6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124" exitCode=0
Dec 04 06:55:15 crc kubenswrapper[4832]: I1204 06:55:15.071234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hfk8" event={"ID":"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a","Type":"ContainerDied","Data":"6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124"}
Dec 04 06:55:15 crc kubenswrapper[4832]: I1204 06:55:15.071276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hfk8" event={"ID":"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a","Type":"ContainerStarted","Data":"2cead256bd51d30ec6b938638281b29d113261bde9b79c3a83c3581840b98432"}
Dec 04 06:55:17 crc kubenswrapper[4832]: I1204 06:55:17.095437 4832 generic.go:334] "Generic (PLEG): container finished" podID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerID="ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394" exitCode=0
Dec 04 06:55:17 crc kubenswrapper[4832]: I1204 06:55:17.095510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hfk8" event={"ID":"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a","Type":"ContainerDied","Data":"ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394"}
Dec 04 06:55:18 crc kubenswrapper[4832]: I1204 06:55:18.112827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hfk8" event={"ID":"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a","Type":"ContainerStarted","Data":"5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f"}
Dec 04 06:55:18 crc kubenswrapper[4832]: I1204 06:55:18.151890 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8hfk8" podStartSLOduration=2.742204542 podStartE2EDuration="5.151868511s" podCreationTimestamp="2025-12-04 06:55:13 +0000 UTC" firstStartedPulling="2025-12-04 06:55:15.079259606 +0000 UTC m=+2770.692077312" lastFinishedPulling="2025-12-04 06:55:17.488923575 +0000 UTC m=+2773.101741281" observedRunningTime="2025-12-04 06:55:18.141767764 +0000 UTC m=+2773.754585490" watchObservedRunningTime="2025-12-04 06:55:18.151868511 +0000 UTC m=+2773.764686217"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.648882 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzc8"]
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.651961 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.670968 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzc8"]
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.830032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-catalog-content\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.831101 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdrg\" (UniqueName: \"kubernetes.io/projected/10975a43-c895-46a9-94ee-16c27b239e04-kube-api-access-xrdrg\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.831297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-utilities\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.934051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-utilities\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.934249 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-catalog-content\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.934337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrdrg\" (UniqueName: \"kubernetes.io/projected/10975a43-c895-46a9-94ee-16c27b239e04-kube-api-access-xrdrg\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.934879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-utilities\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.934994 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-catalog-content\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.961372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrdrg\" (UniqueName: \"kubernetes.io/projected/10975a43-c895-46a9-94ee-16c27b239e04-kube-api-access-xrdrg\") pod \"redhat-marketplace-cwzc8\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") " pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:22 crc kubenswrapper[4832]: I1204 06:55:22.990633 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:23 crc kubenswrapper[4832]: I1204 06:55:23.778020 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzc8"]
Dec 04 06:55:24 crc kubenswrapper[4832]: I1204 06:55:24.203859 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:24 crc kubenswrapper[4832]: I1204 06:55:24.205427 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:24 crc kubenswrapper[4832]: I1204 06:55:24.208934 4832 generic.go:334] "Generic (PLEG): container finished" podID="10975a43-c895-46a9-94ee-16c27b239e04" containerID="ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf" exitCode=0
Dec 04 06:55:24 crc kubenswrapper[4832]: I1204 06:55:24.209028 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerDied","Data":"ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf"}
Dec 04 06:55:24 crc kubenswrapper[4832]: I1204 06:55:24.209078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerStarted","Data":"3268f6448156592f203c80ee376d2d38eaa96195c25ce9dedf8254cdfd4cf2b1"}
Dec 04 06:55:24 crc kubenswrapper[4832]: I1204 06:55:24.283421 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:25 crc kubenswrapper[4832]: I1204 06:55:25.229341 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerStarted","Data":"d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c"}
Dec 04 06:55:25 crc kubenswrapper[4832]: I1204 06:55:25.289336 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:26 crc kubenswrapper[4832]: I1204 06:55:26.243439 4832 generic.go:334] "Generic (PLEG): container finished" podID="10975a43-c895-46a9-94ee-16c27b239e04" containerID="d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c" exitCode=0
Dec 04 06:55:26 crc kubenswrapper[4832]: I1204 06:55:26.244422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerDied","Data":"d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c"}
Dec 04 06:55:26 crc kubenswrapper[4832]: I1204 06:55:26.244478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerStarted","Data":"0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124"}
Dec 04 06:55:26 crc kubenswrapper[4832]: I1204 06:55:26.303086 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwzc8" podStartSLOduration=2.872655591 podStartE2EDuration="4.303057593s" podCreationTimestamp="2025-12-04 06:55:22 +0000 UTC" firstStartedPulling="2025-12-04 06:55:24.210952145 +0000 UTC m=+2779.823769851" lastFinishedPulling="2025-12-04 06:55:25.641354147 +0000 UTC m=+2781.254171853" observedRunningTime="2025-12-04 06:55:26.295147299 +0000 UTC m=+2781.907965005" watchObservedRunningTime="2025-12-04 06:55:26.303057593 +0000 UTC m=+2781.915875299"
Dec 04 06:55:27 crc kubenswrapper[4832]: I1204 06:55:27.241673 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8hfk8"]
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.263656 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8hfk8" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="registry-server" containerID="cri-o://5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f" gracePeriod=2
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.785481 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.904582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zcgg\" (UniqueName: \"kubernetes.io/projected/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-kube-api-access-7zcgg\") pod \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") "
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.904818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-utilities\") pod \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") "
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.905002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-catalog-content\") pod \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\" (UID: \"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a\") "
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.906357 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-utilities" (OuterVolumeSpecName: "utilities") pod "ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" (UID: "ba536ff8-b8dc-4a8f-bbbd-592a4e80983a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:55:28 crc kubenswrapper[4832]: I1204 06:55:28.917088 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-kube-api-access-7zcgg" (OuterVolumeSpecName: "kube-api-access-7zcgg") pod "ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" (UID: "ba536ff8-b8dc-4a8f-bbbd-592a4e80983a"). InnerVolumeSpecName "kube-api-access-7zcgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.004175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" (UID: "ba536ff8-b8dc-4a8f-bbbd-592a4e80983a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.007840 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.007892 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zcgg\" (UniqueName: \"kubernetes.io/projected/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-kube-api-access-7zcgg\") on node \"crc\" DevicePath \"\""
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.007906 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.276921 4832 generic.go:334] "Generic (PLEG): container finished" podID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerID="5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f" exitCode=0
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.276982 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hfk8" event={"ID":"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a","Type":"ContainerDied","Data":"5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f"}
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.277022 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8hfk8"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.277055 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8hfk8" event={"ID":"ba536ff8-b8dc-4a8f-bbbd-592a4e80983a","Type":"ContainerDied","Data":"2cead256bd51d30ec6b938638281b29d113261bde9b79c3a83c3581840b98432"}
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.277076 4832 scope.go:117] "RemoveContainer" containerID="5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.307375 4832 scope.go:117] "RemoveContainer" containerID="ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.332133 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8hfk8"]
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.343564 4832 scope.go:117] "RemoveContainer" containerID="6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.343591 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8hfk8"]
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.383259 4832 scope.go:117] "RemoveContainer" containerID="5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f"
Dec 04 06:55:29 crc kubenswrapper[4832]: E1204 06:55:29.383945 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f\": container with ID starting with 5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f not found: ID does not exist" containerID="5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.384014 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f"} err="failed to get container status \"5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f\": rpc error: code = NotFound desc = could not find container \"5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f\": container with ID starting with 5a3a4960fdb3959233fb3ea3a70f2eea972fcafa6f2bd3b8052da1a116b6620f not found: ID does not exist"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.384045 4832 scope.go:117] "RemoveContainer" containerID="ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394"
Dec 04 06:55:29 crc kubenswrapper[4832]: E1204 06:55:29.384459 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394\": container with ID starting with ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394 not found: ID does not exist" containerID="ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.384520 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394"} err="failed to get container status \"ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394\": rpc error: code = NotFound desc = could not find container \"ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394\": container with ID starting with ed5bc7b1324184ef9d7612bc20746c75b84e411a9ce5d609b4e8d2516db65394 not found: ID does not exist"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.384562 4832 scope.go:117] "RemoveContainer" containerID="6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124"
Dec 04 06:55:29 crc kubenswrapper[4832]: E1204 06:55:29.385035 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124\": container with ID starting with 6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124 not found: ID does not exist" containerID="6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124"
Dec 04 06:55:29 crc kubenswrapper[4832]: I1204 06:55:29.385061 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124"} err="failed to get container status \"6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124\": rpc error: code = NotFound desc = could not find container \"6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124\": container with ID starting with 6540bcfe81ddcf561b0288b8f5e0e231da790068e18e381b33f46c6f32295124 not found: ID does not exist"
Dec 04 06:55:30 crc kubenswrapper[4832]: I1204 06:55:30.728335 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" path="/var/lib/kubelet/pods/ba536ff8-b8dc-4a8f-bbbd-592a4e80983a/volumes"
Dec 04 06:55:32 crc kubenswrapper[4832]: I1204 06:55:32.991188 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:32 crc kubenswrapper[4832]: I1204 06:55:32.991662 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:33 crc kubenswrapper[4832]: I1204 06:55:33.042650 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:33 crc kubenswrapper[4832]: I1204 06:55:33.402793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:33 crc kubenswrapper[4832]: I1204 06:55:33.467328 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzc8"]
Dec 04 06:55:35 crc kubenswrapper[4832]: I1204 06:55:35.362420 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:55:35 crc kubenswrapper[4832]: I1204 06:55:35.362887 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:55:35 crc kubenswrapper[4832]: I1204 06:55:35.367197 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwzc8" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="registry-server" containerID="cri-o://0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124" gracePeriod=2
Dec 04 06:55:35 crc kubenswrapper[4832]: I1204 06:55:35.909191 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.020040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-utilities\") pod \"10975a43-c895-46a9-94ee-16c27b239e04\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") "
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.020596 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrdrg\" (UniqueName: \"kubernetes.io/projected/10975a43-c895-46a9-94ee-16c27b239e04-kube-api-access-xrdrg\") pod \"10975a43-c895-46a9-94ee-16c27b239e04\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") "
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.020640 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-catalog-content\") pod \"10975a43-c895-46a9-94ee-16c27b239e04\" (UID: \"10975a43-c895-46a9-94ee-16c27b239e04\") "
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.022746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-utilities" (OuterVolumeSpecName: "utilities") pod "10975a43-c895-46a9-94ee-16c27b239e04" (UID: "10975a43-c895-46a9-94ee-16c27b239e04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.034790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10975a43-c895-46a9-94ee-16c27b239e04-kube-api-access-xrdrg" (OuterVolumeSpecName: "kube-api-access-xrdrg") pod "10975a43-c895-46a9-94ee-16c27b239e04" (UID: "10975a43-c895-46a9-94ee-16c27b239e04"). InnerVolumeSpecName "kube-api-access-xrdrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.050139 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10975a43-c895-46a9-94ee-16c27b239e04" (UID: "10975a43-c895-46a9-94ee-16c27b239e04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.123287 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.123841 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrdrg\" (UniqueName: \"kubernetes.io/projected/10975a43-c895-46a9-94ee-16c27b239e04-kube-api-access-xrdrg\") on node \"crc\" DevicePath \"\""
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.123860 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10975a43-c895-46a9-94ee-16c27b239e04-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.380674 4832 generic.go:334] "Generic (PLEG): container finished" podID="10975a43-c895-46a9-94ee-16c27b239e04" containerID="0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124" exitCode=0
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.380749 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerDied","Data":"0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124"}
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.380795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzc8" event={"ID":"10975a43-c895-46a9-94ee-16c27b239e04","Type":"ContainerDied","Data":"3268f6448156592f203c80ee376d2d38eaa96195c25ce9dedf8254cdfd4cf2b1"}
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.380795 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzc8"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.380831 4832 scope.go:117] "RemoveContainer" containerID="0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.404703 4832 scope.go:117] "RemoveContainer" containerID="d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.421596 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzc8"]
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.430887 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzc8"]
Dec 04 06:55:36 crc kubenswrapper[4832]: E1204 06:55:36.434667 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10975a43_c895_46a9_94ee_16c27b239e04.slice\": RecentStats: unable to find data in memory cache]"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.456437 4832 scope.go:117] "RemoveContainer" containerID="ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.482323 4832 scope.go:117] "RemoveContainer" containerID="0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124"
Dec 04 06:55:36 crc kubenswrapper[4832]: E1204 06:55:36.483073 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124\": container with ID starting with 0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124 not found: ID does not exist" containerID="0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.483113 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124"} err="failed to get container status \"0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124\": rpc error: code = NotFound desc = could not find container \"0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124\": container with ID starting with 0f6bc7fa131698de7fd563c95c56f312d1891af32f4b3426bc23e1be7c08d124 not found: ID does not exist"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.483142 4832 scope.go:117] "RemoveContainer" containerID="d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c"
Dec 04 06:55:36 crc kubenswrapper[4832]: E1204 06:55:36.483792 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c\": container with ID starting with d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c not found: ID does not exist" containerID="d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.483840 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c"} err="failed to get container status \"d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c\": rpc error: code = NotFound desc = could not find container \"d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c\": container with ID starting with d51115a0ccc122f9d4fc518c984cc7b6c900de8a4c54e657ee548de64ef3919c not found: ID does not exist"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.483862 4832 scope.go:117] "RemoveContainer" containerID="ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf"
Dec 04 06:55:36 crc kubenswrapper[4832]: E1204 06:55:36.484128 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf\": container with ID starting with ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf not found: ID does not exist" containerID="ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.484155 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf"} err="failed to get container status \"ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf\": rpc error: code = NotFound desc = could not find container \"ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf\": container with ID starting with ef4955a500bda0c37276eece9d25f3f00b9ce9e555417c3518447aa667a165bf not found: ID does not exist"
Dec 04 06:55:36 crc kubenswrapper[4832]: I1204 06:55:36.726695 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10975a43-c895-46a9-94ee-16c27b239e04" path="/var/lib/kubelet/pods/10975a43-c895-46a9-94ee-16c27b239e04/volumes"
Dec 04 06:56:05 crc kubenswrapper[4832]: I1204 06:56:05.363092 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:56:05 crc kubenswrapper[4832]: I1204 06:56:05.363848 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.362994 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.363738 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.363791 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4"
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.364497 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.364578 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" gracePeriod=600
Dec 04 06:56:35 crc kubenswrapper[4832]: E1204 06:56:35.499417 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.996133 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" exitCode=0
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.996206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"}
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.996294 4832 scope.go:117] "RemoveContainer" containerID="ac7730c9b11cdb80159176610c49dc8399f4f04cdcae1508d2787a067aba46cd"
Dec 04 06:56:35 crc kubenswrapper[4832]: I1204 06:56:35.997299 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:56:35 crc kubenswrapper[4832]: E1204 06:56:35.997744 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:56:47 crc kubenswrapper[4832]: I1204 06:56:47.711000 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:56:47 crc kubenswrapper[4832]: E1204 06:56:47.711755 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:56:59 crc kubenswrapper[4832]: I1204 06:56:59.710557 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:56:59 crc kubenswrapper[4832]: E1204 06:56:59.711871 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.653418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qs946"]
Dec 04 06:57:05 crc kubenswrapper[4832]: E1204 06:57:05.655331 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="registry-server"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655349 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="registry-server"
Dec 04 06:57:05 crc kubenswrapper[4832]: E1204 06:57:05.655360 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="extract-utilities"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655367 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="extract-utilities"
Dec 04 06:57:05 crc kubenswrapper[4832]: E1204 06:57:05.655379 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="extract-utilities"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655386 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="extract-utilities"
Dec 04 06:57:05 crc kubenswrapper[4832]: E1204 06:57:05.655414 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="registry-server"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655422 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="registry-server"
Dec 04 06:57:05 crc kubenswrapper[4832]: E1204 06:57:05.655435 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="extract-content"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655441 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="extract-content"
Dec 04 06:57:05 crc kubenswrapper[4832]: E1204 06:57:05.655460 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="extract-content"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655465 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="extract-content"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655693 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="10975a43-c895-46a9-94ee-16c27b239e04" containerName="registry-server"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.655713 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba536ff8-b8dc-4a8f-bbbd-592a4e80983a" containerName="registry-server"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.657668 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.678960 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qs946"]
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.681356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-catalog-content\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.681785 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-utilities\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.681971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8pgp\" (UniqueName: \"kubernetes.io/projected/02dd48bd-09d0-4189-b3fd-3f106abf3473-kube-api-access-z8pgp\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.785110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-catalog-content\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.785280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-utilities\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.785328 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8pgp\" (UniqueName: \"kubernetes.io/projected/02dd48bd-09d0-4189-b3fd-3f106abf3473-kube-api-access-z8pgp\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.786037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-catalog-content\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.786070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-utilities\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:05 crc kubenswrapper[4832]: I1204 06:57:05.818336 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8pgp\" (UniqueName: \"kubernetes.io/projected/02dd48bd-09d0-4189-b3fd-3f106abf3473-kube-api-access-z8pgp\") pod \"community-operators-qs946\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") " pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:06 crc kubenswrapper[4832]: I1204 06:57:06.014412 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:06 crc kubenswrapper[4832]: I1204 06:57:06.673978 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qs946"]
Dec 04 06:57:07 crc kubenswrapper[4832]: I1204 06:57:07.384426 4832 generic.go:334] "Generic (PLEG): container finished" podID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerID="a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31" exitCode=0
Dec 04 06:57:07 crc kubenswrapper[4832]: I1204 06:57:07.384541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerDied","Data":"a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31"}
Dec 04 06:57:07 crc kubenswrapper[4832]: I1204 06:57:07.385104 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerStarted","Data":"117422674f8e982986705ca587400746dc4f4507dc2ca15a88a3521d7f3b2434"}
Dec 04 06:57:08 crc kubenswrapper[4832]: I1204 06:57:08.397267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerStarted","Data":"13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb"}
Dec 04 06:57:08 crc kubenswrapper[4832]: E1204 06:57:08.759352 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02dd48bd_09d0_4189_b3fd_3f106abf3473.slice/crio-conmon-13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb.scope\": RecentStats: unable to find data in memory cache]"
Dec 04 06:57:09 crc kubenswrapper[4832]: I1204 06:57:09.407333 4832 generic.go:334] "Generic (PLEG): container finished" podID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerID="13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb" exitCode=0
Dec 04 06:57:09 crc kubenswrapper[4832]: I1204 06:57:09.407454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerDied","Data":"13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb"}
Dec 04 06:57:10 crc kubenswrapper[4832]: I1204 06:57:10.419535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerStarted","Data":"5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443"}
Dec 04 06:57:10 crc kubenswrapper[4832]: I1204 06:57:10.442016 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qs946" podStartSLOduration=3.004706323 podStartE2EDuration="5.441994136s" podCreationTimestamp="2025-12-04 06:57:05 +0000 UTC" firstStartedPulling="2025-12-04 06:57:07.387757161 +0000 UTC m=+2883.000574867" lastFinishedPulling="2025-12-04 06:57:09.825044974 +0000 UTC m=+2885.437862680" observedRunningTime="2025-12-04 06:57:10.439346451 +0000 UTC m=+2886.052164157" watchObservedRunningTime="2025-12-04 06:57:10.441994136 +0000 UTC m=+2886.054811842"
Dec 04 06:57:10 crc kubenswrapper[4832]: I1204 06:57:10.710683 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:57:10 crc kubenswrapper[4832]: E1204 06:57:10.710967 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:57:16 crc kubenswrapper[4832]: I1204 06:57:16.016417 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:16 crc kubenswrapper[4832]: I1204 06:57:16.016872 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:16 crc kubenswrapper[4832]: I1204 06:57:16.071372 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:16 crc kubenswrapper[4832]: I1204 06:57:16.523416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:16 crc kubenswrapper[4832]: I1204 06:57:16.575252 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qs946"]
Dec 04 06:57:18 crc kubenswrapper[4832]: I1204 06:57:18.498361 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qs946" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="registry-server" containerID="cri-o://5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443" gracePeriod=2
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.099378 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.167133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-utilities\") pod \"02dd48bd-09d0-4189-b3fd-3f106abf3473\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") "
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.167206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8pgp\" (UniqueName: \"kubernetes.io/projected/02dd48bd-09d0-4189-b3fd-3f106abf3473-kube-api-access-z8pgp\") pod \"02dd48bd-09d0-4189-b3fd-3f106abf3473\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") "
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.167267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-catalog-content\") pod \"02dd48bd-09d0-4189-b3fd-3f106abf3473\" (UID: \"02dd48bd-09d0-4189-b3fd-3f106abf3473\") "
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.168406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-utilities" (OuterVolumeSpecName: "utilities") pod "02dd48bd-09d0-4189-b3fd-3f106abf3473" (UID: "02dd48bd-09d0-4189-b3fd-3f106abf3473"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.178666 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02dd48bd-09d0-4189-b3fd-3f106abf3473-kube-api-access-z8pgp" (OuterVolumeSpecName: "kube-api-access-z8pgp") pod "02dd48bd-09d0-4189-b3fd-3f106abf3473" (UID: "02dd48bd-09d0-4189-b3fd-3f106abf3473"). InnerVolumeSpecName "kube-api-access-z8pgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.230976 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02dd48bd-09d0-4189-b3fd-3f106abf3473" (UID: "02dd48bd-09d0-4189-b3fd-3f106abf3473"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.269596 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.269649 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8pgp\" (UniqueName: \"kubernetes.io/projected/02dd48bd-09d0-4189-b3fd-3f106abf3473-kube-api-access-z8pgp\") on node \"crc\" DevicePath \"\""
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.269666 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02dd48bd-09d0-4189-b3fd-3f106abf3473-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.509237 4832 generic.go:334] "Generic (PLEG): container finished" podID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerID="5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443" exitCode=0
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.509283 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerDied","Data":"5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443"}
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.509313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs946" event={"ID":"02dd48bd-09d0-4189-b3fd-3f106abf3473","Type":"ContainerDied","Data":"117422674f8e982986705ca587400746dc4f4507dc2ca15a88a3521d7f3b2434"}
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.509338 4832 scope.go:117] "RemoveContainer" containerID="5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.509340 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs946"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.542790 4832 scope.go:117] "RemoveContainer" containerID="13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.553853 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qs946"]
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.572126 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qs946"]
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.578934 4832 scope.go:117] "RemoveContainer" containerID="a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.639038 4832 scope.go:117] "RemoveContainer" containerID="5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443"
Dec 04 06:57:19 crc kubenswrapper[4832]: E1204 06:57:19.639984 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443\": container with ID starting with 5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443 not found: ID does not exist" containerID="5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.640052 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443"} err="failed to get container status \"5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443\": rpc error: code = NotFound desc = could not find container \"5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443\": container with ID starting with 5c2673131b64f7e91fea7477e253a271bbe9e8bc987f3089effa8c9a9d795443 not found: ID does not exist"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.640094 4832 scope.go:117] "RemoveContainer" containerID="13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb"
Dec 04 06:57:19 crc kubenswrapper[4832]: E1204 06:57:19.640431 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb\": container with ID starting with 13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb not found: ID does not exist" containerID="13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.640455 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb"} err="failed to get container status \"13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb\": rpc error: code = NotFound desc = could not find container \"13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb\": container with ID starting with 13ea1f10e105d09392a96f37f065a565dd74293c1858a0fc5ae1412bffa994cb not found: ID does not exist"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.640469 4832 scope.go:117] "RemoveContainer" containerID="a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31"
Dec 04 06:57:19 crc kubenswrapper[4832]: E1204 06:57:19.640737 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31\": container with ID starting with a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31 not found: ID does not exist" containerID="a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31"
Dec 04 06:57:19 crc kubenswrapper[4832]: I1204 06:57:19.640772 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31"} err="failed to get container status \"a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31\": rpc error: code = NotFound desc = could not find container \"a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31\": container with ID starting with a27159f17e2105eb9ccd1d0f786f9b3e2282f62bdbfb9a2a9eda6885d2589a31 not found: ID does not exist"
Dec 04 06:57:20 crc kubenswrapper[4832]: I1204 06:57:20.724033 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" path="/var/lib/kubelet/pods/02dd48bd-09d0-4189-b3fd-3f106abf3473/volumes"
Dec 04 06:57:21 crc kubenswrapper[4832]: I1204 06:57:21.710573 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:57:21 crc kubenswrapper[4832]: E1204 06:57:21.711113 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:57:33 crc kubenswrapper[4832]: I1204 06:57:33.711678 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:57:33 crc kubenswrapper[4832]: E1204 06:57:33.712521 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:57:45 crc kubenswrapper[4832]: I1204 06:57:45.711273 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:57:45 crc kubenswrapper[4832]: E1204 06:57:45.712183 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:57:59 crc kubenswrapper[4832]: I1204 06:57:59.711561 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:57:59 crc kubenswrapper[4832]: E1204 06:57:59.712731 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:58:13 crc kubenswrapper[4832]: I1204 06:58:13.711292 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:58:13 crc kubenswrapper[4832]: E1204 06:58:13.712258 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:58:25 crc kubenswrapper[4832]: I1204 06:58:25.711270 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:58:25 crc kubenswrapper[4832]: E1204 06:58:25.712607 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:58:36 crc kubenswrapper[4832]: I1204 06:58:36.710450 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:58:36 crc kubenswrapper[4832]: E1204 06:58:36.711624 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:58:50 crc kubenswrapper[4832]: I1204 06:58:50.710615 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:58:50 crc kubenswrapper[4832]: E1204 06:58:50.711540 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 06:59:03 crc kubenswrapper[4832]: I1204 06:59:03.712226 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e"
Dec 04 06:59:03 crc kubenswrapper[4832]: E1204 06:59:03.713930 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:59:15 crc kubenswrapper[4832]: I1204 06:59:15.711048 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 06:59:15 crc kubenswrapper[4832]: E1204 06:59:15.712008 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:59:27 crc kubenswrapper[4832]: I1204 06:59:27.711124 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 06:59:27 crc kubenswrapper[4832]: E1204 06:59:27.713026 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.460763 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s54vr"] Dec 04 06:59:29 crc kubenswrapper[4832]: E1204 06:59:29.461587 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="extract-utilities" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.461601 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="extract-utilities" Dec 04 06:59:29 crc kubenswrapper[4832]: E1204 06:59:29.461631 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="registry-server" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.461638 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="registry-server" Dec 04 06:59:29 crc kubenswrapper[4832]: E1204 06:59:29.461651 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="extract-content" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.461657 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="extract-content" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.461881 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd48bd-09d0-4189-b3fd-3f106abf3473" containerName="registry-server" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.463281 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.476816 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s54vr"] Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.587228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-utilities\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.587443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-catalog-content\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.587483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prf84\" (UniqueName: \"kubernetes.io/projected/d98fc3f4-6924-4a02-ac99-07871184acdb-kube-api-access-prf84\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.689673 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-utilities\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.689820 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-catalog-content\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.689854 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prf84\" (UniqueName: \"kubernetes.io/projected/d98fc3f4-6924-4a02-ac99-07871184acdb-kube-api-access-prf84\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.690558 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-catalog-content\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.690549 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-utilities\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.712056 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-prf84\" (UniqueName: \"kubernetes.io/projected/d98fc3f4-6924-4a02-ac99-07871184acdb-kube-api-access-prf84\") pod \"certified-operators-s54vr\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:29 crc kubenswrapper[4832]: I1204 06:59:29.806913 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:30 crc kubenswrapper[4832]: I1204 06:59:30.361841 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s54vr"] Dec 04 06:59:30 crc kubenswrapper[4832]: I1204 06:59:30.883207 4832 generic.go:334] "Generic (PLEG): container finished" podID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerID="04030ac2d392f4e6929094d913baa5f7bae96894eb38b0781d2fd35787e1c055" exitCode=0 Dec 04 06:59:30 crc kubenswrapper[4832]: I1204 06:59:30.883442 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerDied","Data":"04030ac2d392f4e6929094d913baa5f7bae96894eb38b0781d2fd35787e1c055"} Dec 04 06:59:30 crc kubenswrapper[4832]: I1204 06:59:30.883715 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerStarted","Data":"bf3f6959f24cb4603757ac37ddd2a6e20d463db5d28d201cc582dda9538e08ba"} Dec 04 06:59:30 crc kubenswrapper[4832]: I1204 06:59:30.886258 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 06:59:31 crc kubenswrapper[4832]: I1204 06:59:31.895852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerStarted","Data":"3d2957a1ac2564e3217c7b287e82ee7367b2fb4d358f203170e9f8aea34e2861"} Dec 04 06:59:32 crc kubenswrapper[4832]: I1204 06:59:32.912566 4832 generic.go:334] "Generic (PLEG): container finished" podID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerID="3d2957a1ac2564e3217c7b287e82ee7367b2fb4d358f203170e9f8aea34e2861" exitCode=0 Dec 04 06:59:32 crc kubenswrapper[4832]: I1204 06:59:32.912690 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerDied","Data":"3d2957a1ac2564e3217c7b287e82ee7367b2fb4d358f203170e9f8aea34e2861"} Dec 04 06:59:33 crc kubenswrapper[4832]: I1204 06:59:33.927012 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerStarted","Data":"3cb2506af5282cb1a482839151f8a2c99585326cd980498474b8fd6cd135a9b9"} Dec 04 06:59:33 crc kubenswrapper[4832]: I1204 06:59:33.947004 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s54vr" podStartSLOduration=2.47873713 podStartE2EDuration="4.946974255s" podCreationTimestamp="2025-12-04 06:59:29 +0000 UTC" firstStartedPulling="2025-12-04 06:59:30.885890714 +0000 UTC m=+3026.498708420" lastFinishedPulling="2025-12-04 06:59:33.354127819 +0000 UTC m=+3028.966945545" observedRunningTime="2025-12-04 06:59:33.945016917 +0000 UTC m=+3029.557834643" watchObservedRunningTime="2025-12-04 
06:59:33.946974255 +0000 UTC m=+3029.559791961" Dec 04 06:59:39 crc kubenswrapper[4832]: I1204 06:59:39.712236 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 06:59:39 crc kubenswrapper[4832]: E1204 06:59:39.713932 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 06:59:39 crc kubenswrapper[4832]: I1204 06:59:39.807930 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:39 crc kubenswrapper[4832]: I1204 06:59:39.808000 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:39 crc kubenswrapper[4832]: I1204 06:59:39.863804 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:40 crc kubenswrapper[4832]: I1204 06:59:40.060749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:40 crc kubenswrapper[4832]: I1204 06:59:40.115085 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s54vr"] Dec 04 06:59:42 crc kubenswrapper[4832]: I1204 06:59:42.033889 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s54vr" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="registry-server" containerID="cri-o://3cb2506af5282cb1a482839151f8a2c99585326cd980498474b8fd6cd135a9b9" gracePeriod=2 Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.055248 4832 generic.go:334] "Generic (PLEG): container finished" podID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerID="3cb2506af5282cb1a482839151f8a2c99585326cd980498474b8fd6cd135a9b9" exitCode=0 Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.055346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerDied","Data":"3cb2506af5282cb1a482839151f8a2c99585326cd980498474b8fd6cd135a9b9"} Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.245645 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.311197 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-utilities\") pod \"d98fc3f4-6924-4a02-ac99-07871184acdb\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.311246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prf84\" (UniqueName: \"kubernetes.io/projected/d98fc3f4-6924-4a02-ac99-07871184acdb-kube-api-access-prf84\") pod \"d98fc3f4-6924-4a02-ac99-07871184acdb\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.311499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-catalog-content\") pod \"d98fc3f4-6924-4a02-ac99-07871184acdb\" (UID: \"d98fc3f4-6924-4a02-ac99-07871184acdb\") " Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.316038 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-utilities" (OuterVolumeSpecName: "utilities") pod "d98fc3f4-6924-4a02-ac99-07871184acdb" (UID: "d98fc3f4-6924-4a02-ac99-07871184acdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.342822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98fc3f4-6924-4a02-ac99-07871184acdb-kube-api-access-prf84" (OuterVolumeSpecName: "kube-api-access-prf84") pod "d98fc3f4-6924-4a02-ac99-07871184acdb" (UID: "d98fc3f4-6924-4a02-ac99-07871184acdb"). InnerVolumeSpecName "kube-api-access-prf84". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.366043 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d98fc3f4-6924-4a02-ac99-07871184acdb" (UID: "d98fc3f4-6924-4a02-ac99-07871184acdb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.413956 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.414036 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prf84\" (UniqueName: \"kubernetes.io/projected/d98fc3f4-6924-4a02-ac99-07871184acdb-kube-api-access-prf84\") on node \"crc\" DevicePath \"\"" Dec 04 06:59:43 crc kubenswrapper[4832]: I1204 06:59:43.414047 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98fc3f4-6924-4a02-ac99-07871184acdb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.071098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s54vr" event={"ID":"d98fc3f4-6924-4a02-ac99-07871184acdb","Type":"ContainerDied","Data":"bf3f6959f24cb4603757ac37ddd2a6e20d463db5d28d201cc582dda9538e08ba"} Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.071466 4832 scope.go:117] "RemoveContainer" containerID="3cb2506af5282cb1a482839151f8a2c99585326cd980498474b8fd6cd135a9b9" Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.071304 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s54vr" Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.094497 4832 scope.go:117] "RemoveContainer" containerID="3d2957a1ac2564e3217c7b287e82ee7367b2fb4d358f203170e9f8aea34e2861" Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.124878 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s54vr"] Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.131692 4832 scope.go:117] "RemoveContainer" containerID="04030ac2d392f4e6929094d913baa5f7bae96894eb38b0781d2fd35787e1c055" Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.142806 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s54vr"] Dec 04 06:59:44 crc kubenswrapper[4832]: I1204 06:59:44.728670 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" path="/var/lib/kubelet/pods/d98fc3f4-6924-4a02-ac99-07871184acdb/volumes" Dec 04 06:59:52 crc kubenswrapper[4832]: I1204 06:59:52.710542 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 06:59:52 crc kubenswrapper[4832]: E1204 06:59:52.711302 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.160918 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z"] Dec 04 07:00:00 crc kubenswrapper[4832]: E1204 07:00:00.161995 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" 
containerName="extract-content" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.162013 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="extract-content" Dec 04 07:00:00 crc kubenswrapper[4832]: E1204 07:00:00.162033 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="extract-utilities" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.162044 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="extract-utilities" Dec 04 07:00:00 crc kubenswrapper[4832]: E1204 07:00:00.162080 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="registry-server" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.162086 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="registry-server" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.162330 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98fc3f4-6924-4a02-ac99-07871184acdb" containerName="registry-server" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.163294 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.166521 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.166838 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.171237 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z"] Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.289096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tmj\" (UniqueName: \"kubernetes.io/projected/6aa18452-7975-48d2-b8ea-22f6fa27460a-kube-api-access-82tmj\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.289544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aa18452-7975-48d2-b8ea-22f6fa27460a-secret-volume\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.289948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aa18452-7975-48d2-b8ea-22f6fa27460a-config-volume\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.392166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6aa18452-7975-48d2-b8ea-22f6fa27460a-config-volume\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.392370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82tmj\" (UniqueName: \"kubernetes.io/projected/6aa18452-7975-48d2-b8ea-22f6fa27460a-kube-api-access-82tmj\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.392503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aa18452-7975-48d2-b8ea-22f6fa27460a-secret-volume\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.394034 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aa18452-7975-48d2-b8ea-22f6fa27460a-config-volume\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.398863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aa18452-7975-48d2-b8ea-22f6fa27460a-secret-volume\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.411203 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82tmj\" (UniqueName: \"kubernetes.io/projected/6aa18452-7975-48d2-b8ea-22f6fa27460a-kube-api-access-82tmj\") pod \"collect-profiles-29413860-7dr5z\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.485058 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:00 crc kubenswrapper[4832]: I1204 07:00:00.985917 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z"] Dec 04 07:00:01 crc kubenswrapper[4832]: I1204 07:00:01.295889 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" event={"ID":"6aa18452-7975-48d2-b8ea-22f6fa27460a","Type":"ContainerStarted","Data":"b7365f4eaa60fccaf733ef662d052c9a05cb4f5e0f1b1c3dacd6b2209ef0bfa8"} Dec 04 07:00:01 crc kubenswrapper[4832]: I1204 07:00:01.296311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" event={"ID":"6aa18452-7975-48d2-b8ea-22f6fa27460a","Type":"ContainerStarted","Data":"861440a431e9627b40ac8c9e6fa1589ee2141ce55c48995397d5eb7af0c08985"} Dec 04 07:00:02 crc kubenswrapper[4832]: I1204 07:00:02.306520 4832 generic.go:334] "Generic (PLEG): container finished" podID="6aa18452-7975-48d2-b8ea-22f6fa27460a" containerID="b7365f4eaa60fccaf733ef662d052c9a05cb4f5e0f1b1c3dacd6b2209ef0bfa8" exitCode=0 Dec 04 07:00:02 crc kubenswrapper[4832]: I1204 07:00:02.306568 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" event={"ID":"6aa18452-7975-48d2-b8ea-22f6fa27460a","Type":"ContainerDied","Data":"b7365f4eaa60fccaf733ef662d052c9a05cb4f5e0f1b1c3dacd6b2209ef0bfa8"} Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.686927 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.770765 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aa18452-7975-48d2-b8ea-22f6fa27460a-config-volume\") pod \"6aa18452-7975-48d2-b8ea-22f6fa27460a\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.771142 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82tmj\" (UniqueName: \"kubernetes.io/projected/6aa18452-7975-48d2-b8ea-22f6fa27460a-kube-api-access-82tmj\") pod \"6aa18452-7975-48d2-b8ea-22f6fa27460a\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.771420 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aa18452-7975-48d2-b8ea-22f6fa27460a-secret-volume\") pod \"6aa18452-7975-48d2-b8ea-22f6fa27460a\" (UID: \"6aa18452-7975-48d2-b8ea-22f6fa27460a\") " Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.771604 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa18452-7975-48d2-b8ea-22f6fa27460a-config-volume" (OuterVolumeSpecName: "config-volume") pod "6aa18452-7975-48d2-b8ea-22f6fa27460a" (UID: "6aa18452-7975-48d2-b8ea-22f6fa27460a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.772108 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aa18452-7975-48d2-b8ea-22f6fa27460a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.777037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa18452-7975-48d2-b8ea-22f6fa27460a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6aa18452-7975-48d2-b8ea-22f6fa27460a" (UID: "6aa18452-7975-48d2-b8ea-22f6fa27460a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.779677 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa18452-7975-48d2-b8ea-22f6fa27460a-kube-api-access-82tmj" (OuterVolumeSpecName: "kube-api-access-82tmj") pod "6aa18452-7975-48d2-b8ea-22f6fa27460a" (UID: "6aa18452-7975-48d2-b8ea-22f6fa27460a"). InnerVolumeSpecName "kube-api-access-82tmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.874508 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82tmj\" (UniqueName: \"kubernetes.io/projected/6aa18452-7975-48d2-b8ea-22f6fa27460a-kube-api-access-82tmj\") on node \"crc\" DevicePath \"\"" Dec 04 07:00:03 crc kubenswrapper[4832]: I1204 07:00:03.874545 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6aa18452-7975-48d2-b8ea-22f6fa27460a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.326416 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" event={"ID":"6aa18452-7975-48d2-b8ea-22f6fa27460a","Type":"ContainerDied","Data":"861440a431e9627b40ac8c9e6fa1589ee2141ce55c48995397d5eb7af0c08985"} Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.326716 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861440a431e9627b40ac8c9e6fa1589ee2141ce55c48995397d5eb7af0c08985" Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.326483 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413860-7dr5z" Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.405041 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4"] Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.414884 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413815-xcrv4"] Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.721415 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:00:04 crc kubenswrapper[4832]: E1204 07:00:04.721745 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:00:04 crc kubenswrapper[4832]: I1204 07:00:04.724338 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd3393f-7bbf-4a54-a45c-5f206912dd1d" path="/var/lib/kubelet/pods/0bd3393f-7bbf-4a54-a45c-5f206912dd1d/volumes" Dec 04 07:00:17 crc kubenswrapper[4832]: I1204 07:00:17.712523 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:00:17 crc kubenswrapper[4832]: E1204 07:00:17.714490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:00:21 crc kubenswrapper[4832]: I1204 07:00:21.885063 4832 scope.go:117] "RemoveContainer" containerID="c0e7c65b69ea52b65e48bd81b4a8679fa63d8afc0d84346e84d53c95c9dab11e" Dec 04 07:00:32 crc kubenswrapper[4832]: I1204 07:00:32.711209 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:00:32 crc kubenswrapper[4832]: E1204 07:00:32.712009 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:00:45 crc kubenswrapper[4832]: I1204 07:00:45.711363 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:00:45 crc kubenswrapper[4832]: E1204 07:00:45.712168 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:00:56 crc kubenswrapper[4832]: I1204 07:00:56.710257 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:00:56 crc kubenswrapper[4832]: E1204 07:00:56.711297 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.181936 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29413861-44mgd"] Dec 04 07:01:00 crc kubenswrapper[4832]: E1204 07:01:00.182969 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa18452-7975-48d2-b8ea-22f6fa27460a" containerName="collect-profiles" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.182991 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa18452-7975-48d2-b8ea-22f6fa27460a" containerName="collect-profiles" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.183280 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa18452-7975-48d2-b8ea-22f6fa27460a" containerName="collect-profiles" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.184160 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.202593 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29413861-44mgd"] Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.285162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-config-data\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.285227 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-fernet-keys\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.285256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-combined-ca-bundle\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.285318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhqv\" (UniqueName: \"kubernetes.io/projected/38e7bb37-9dd3-4010-89f6-e49c6d710eab-kube-api-access-4lhqv\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc 
kubenswrapper[4832]: I1204 07:01:00.387645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lhqv\" (UniqueName: \"kubernetes.io/projected/38e7bb37-9dd3-4010-89f6-e49c6d710eab-kube-api-access-4lhqv\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.387857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-config-data\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.387888 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-fernet-keys\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.387911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-combined-ca-bundle\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.396888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-combined-ca-bundle\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.396918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-fernet-keys\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.398106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-config-data\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.421325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhqv\" (UniqueName: \"kubernetes.io/projected/38e7bb37-9dd3-4010-89f6-e49c6d710eab-kube-api-access-4lhqv\") pod \"keystone-cron-29413861-44mgd\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.510510 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:00 crc kubenswrapper[4832]: I1204 07:01:00.978438 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29413861-44mgd"] Dec 04 07:01:00 crc kubenswrapper[4832]: W1204 07:01:00.984630 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e7bb37_9dd3_4010_89f6_e49c6d710eab.slice/crio-3e7262f73d073c245f5befcfa8fef2a0f7e9b7e5040ca3a55b9960586acc0b54 WatchSource:0}: Error finding container 3e7262f73d073c245f5befcfa8fef2a0f7e9b7e5040ca3a55b9960586acc0b54: Status 404 returned error can't find the container with id 3e7262f73d073c245f5befcfa8fef2a0f7e9b7e5040ca3a55b9960586acc0b54 Dec 04 07:01:01 crc kubenswrapper[4832]: I1204 07:01:01.940448 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413861-44mgd" event={"ID":"38e7bb37-9dd3-4010-89f6-e49c6d710eab","Type":"ContainerStarted","Data":"60cc5aaa5ab650a935f97c578e6d1a13a88fe615b58869e166025a39f7af6b8e"} Dec 04 07:01:01 crc kubenswrapper[4832]: I1204 07:01:01.941120 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413861-44mgd" event={"ID":"38e7bb37-9dd3-4010-89f6-e49c6d710eab","Type":"ContainerStarted","Data":"3e7262f73d073c245f5befcfa8fef2a0f7e9b7e5040ca3a55b9960586acc0b54"} Dec 04 07:01:01 crc kubenswrapper[4832]: I1204 07:01:01.983887 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29413861-44mgd" podStartSLOduration=1.9838580430000001 podStartE2EDuration="1.983858043s" podCreationTimestamp="2025-12-04 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 07:01:01.973977751 +0000 UTC m=+3117.586795457" watchObservedRunningTime="2025-12-04 07:01:01.983858043 +0000 UTC m=+3117.596675749" Dec 04 07:01:03 crc kubenswrapper[4832]: I1204 07:01:03.978073 4832 generic.go:334] "Generic (PLEG): container finished" podID="38e7bb37-9dd3-4010-89f6-e49c6d710eab" containerID="60cc5aaa5ab650a935f97c578e6d1a13a88fe615b58869e166025a39f7af6b8e" exitCode=0 Dec 04 07:01:03 crc kubenswrapper[4832]: I1204 07:01:03.978189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413861-44mgd" event={"ID":"38e7bb37-9dd3-4010-89f6-e49c6d710eab","Type":"ContainerDied","Data":"60cc5aaa5ab650a935f97c578e6d1a13a88fe615b58869e166025a39f7af6b8e"} Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.459254 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.527187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-fernet-keys\") pod \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.527483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lhqv\" (UniqueName: \"kubernetes.io/projected/38e7bb37-9dd3-4010-89f6-e49c6d710eab-kube-api-access-4lhqv\") pod \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.527526 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-combined-ca-bundle\") pod \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.527610 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-config-data\") pod \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\" (UID: \"38e7bb37-9dd3-4010-89f6-e49c6d710eab\") " Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.539520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e7bb37-9dd3-4010-89f6-e49c6d710eab-kube-api-access-4lhqv" (OuterVolumeSpecName: "kube-api-access-4lhqv") pod "38e7bb37-9dd3-4010-89f6-e49c6d710eab" (UID: "38e7bb37-9dd3-4010-89f6-e49c6d710eab"). InnerVolumeSpecName "kube-api-access-4lhqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.540295 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "38e7bb37-9dd3-4010-89f6-e49c6d710eab" (UID: "38e7bb37-9dd3-4010-89f6-e49c6d710eab"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.580573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38e7bb37-9dd3-4010-89f6-e49c6d710eab" (UID: "38e7bb37-9dd3-4010-89f6-e49c6d710eab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.613518 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-config-data" (OuterVolumeSpecName: "config-data") pod "38e7bb37-9dd3-4010-89f6-e49c6d710eab" (UID: "38e7bb37-9dd3-4010-89f6-e49c6d710eab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.630648 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lhqv\" (UniqueName: \"kubernetes.io/projected/38e7bb37-9dd3-4010-89f6-e49c6d710eab-kube-api-access-4lhqv\") on node \"crc\" DevicePath \"\"" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.631037 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.631047 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 07:01:05 crc kubenswrapper[4832]: I1204 07:01:05.631056 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38e7bb37-9dd3-4010-89f6-e49c6d710eab-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 04 07:01:06 crc kubenswrapper[4832]: I1204 07:01:06.011193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413861-44mgd" event={"ID":"38e7bb37-9dd3-4010-89f6-e49c6d710eab","Type":"ContainerDied","Data":"3e7262f73d073c245f5befcfa8fef2a0f7e9b7e5040ca3a55b9960586acc0b54"} Dec 04 07:01:06 crc kubenswrapper[4832]: I1204 07:01:06.011246 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7262f73d073c245f5befcfa8fef2a0f7e9b7e5040ca3a55b9960586acc0b54" Dec 04 07:01:06 crc kubenswrapper[4832]: I1204 07:01:06.011309 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29413861-44mgd" Dec 04 07:01:10 crc kubenswrapper[4832]: I1204 07:01:10.711008 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:01:10 crc kubenswrapper[4832]: E1204 07:01:10.712007 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:01:21 crc kubenswrapper[4832]: I1204 07:01:21.710980 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:01:21 crc kubenswrapper[4832]: E1204 07:01:21.711768 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:01:33 crc kubenswrapper[4832]: I1204 07:01:33.710997 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:01:33 crc kubenswrapper[4832]: E1204 07:01:33.711900 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:01:48 crc kubenswrapper[4832]: I1204 07:01:48.711599 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:01:49 crc kubenswrapper[4832]: I1204 07:01:49.527986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"856a93375e1f4eb567291b7ae816d28801aab20bb53d2043d8e3e06041624af8"} Dec 04 07:04:05 crc kubenswrapper[4832]: I1204 07:04:05.362762 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:04:05 crc kubenswrapper[4832]: I1204 07:04:05.363675 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:04:35 crc kubenswrapper[4832]: I1204 07:04:35.362877 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:04:35 crc kubenswrapper[4832]: I1204 07:04:35.364220 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.362664 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.363270 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.363327 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.364365 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"856a93375e1f4eb567291b7ae816d28801aab20bb53d2043d8e3e06041624af8"} 
pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.364448 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://856a93375e1f4eb567291b7ae816d28801aab20bb53d2043d8e3e06041624af8" gracePeriod=600 Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.684879 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="856a93375e1f4eb567291b7ae816d28801aab20bb53d2043d8e3e06041624af8" exitCode=0 Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.685567 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"856a93375e1f4eb567291b7ae816d28801aab20bb53d2043d8e3e06041624af8"} Dec 04 07:05:05 crc kubenswrapper[4832]: I1204 07:05:05.685820 4832 scope.go:117] "RemoveContainer" containerID="a023d153974515928150f5bf4ab0643e394133c2aa532ab975730fa13e49471e" Dec 04 07:05:06 crc kubenswrapper[4832]: I1204 07:05:06.695758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac"} Dec 04 07:06:02 crc kubenswrapper[4832]: I1204 07:06:02.295494 4832 generic.go:334] "Generic (PLEG): container finished" podID="068b63a2-ea9f-4022-8a42-8d345222f5a7" containerID="6a1f9bc50e66eddce4a083aec14ade328e00a08bb888af6b70555e57df626be3" exitCode=0 Dec 04 07:06:02 crc kubenswrapper[4832]: I1204 07:06:02.295579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"068b63a2-ea9f-4022-8a42-8d345222f5a7","Type":"ContainerDied","Data":"6a1f9bc50e66eddce4a083aec14ade328e00a08bb888af6b70555e57df626be3"} Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.701835 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.844482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-workdir\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.844537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4jlq\" (UniqueName: \"kubernetes.io/projected/068b63a2-ea9f-4022-8a42-8d345222f5a7-kube-api-access-r4jlq\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.844573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ssh-key\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.844657 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config-secret\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.844714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-temporary\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.845648 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.845850 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.846024 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-config-data\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.847624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-config-data" (OuterVolumeSpecName: "config-data") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.847824 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.848605 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ca-certs\") pod \"068b63a2-ea9f-4022-8a42-8d345222f5a7\" (UID: \"068b63a2-ea9f-4022-8a42-8d345222f5a7\") " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.851126 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.851172 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.851593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.852338 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.858486 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068b63a2-ea9f-4022-8a42-8d345222f5a7-kube-api-access-r4jlq" (OuterVolumeSpecName: "kube-api-access-r4jlq") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "kube-api-access-r4jlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.879973 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.880708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.882605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.913606 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "068b63a2-ea9f-4022-8a42-8d345222f5a7" (UID: "068b63a2-ea9f-4022-8a42-8d345222f5a7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953087 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/068b63a2-ea9f-4022-8a42-8d345222f5a7-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953141 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4jlq\" (UniqueName: \"kubernetes.io/projected/068b63a2-ea9f-4022-8a42-8d345222f5a7-kube-api-access-r4jlq\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953158 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953172 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953221 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953235 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/068b63a2-ea9f-4022-8a42-8d345222f5a7-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.953247 4832 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/068b63a2-ea9f-4022-8a42-8d345222f5a7-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:03 crc kubenswrapper[4832]: I1204 07:06:03.975638 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 04 07:06:04 crc kubenswrapper[4832]: I1204 07:06:04.055280 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 04 07:06:04 crc kubenswrapper[4832]: I1204 07:06:04.317833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"068b63a2-ea9f-4022-8a42-8d345222f5a7","Type":"ContainerDied","Data":"1f3a46cf793436f9249a0dab9a3a4fb180cf5c6c8f54e9749c51124ae41decf1"} Dec 04 07:06:04 crc kubenswrapper[4832]: I1204 07:06:04.317876 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3a46cf793436f9249a0dab9a3a4fb180cf5c6c8f54e9749c51124ae41decf1" Dec 04 07:06:04 crc kubenswrapper[4832]: I1204 07:06:04.317887 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.276596 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 04 07:06:16 crc kubenswrapper[4832]: E1204 07:06:16.277669 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068b63a2-ea9f-4022-8a42-8d345222f5a7" containerName="tempest-tests-tempest-tests-runner" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.277684 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="068b63a2-ea9f-4022-8a42-8d345222f5a7" containerName="tempest-tests-tempest-tests-runner" Dec 04 07:06:16 crc kubenswrapper[4832]: E1204 07:06:16.277695 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e7bb37-9dd3-4010-89f6-e49c6d710eab" containerName="keystone-cron" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.277702 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e7bb37-9dd3-4010-89f6-e49c6d710eab" containerName="keystone-cron" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.277937 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e7bb37-9dd3-4010-89f6-e49c6d710eab" containerName="keystone-cron" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.277965 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="068b63a2-ea9f-4022-8a42-8d345222f5a7" containerName="tempest-tests-tempest-tests-runner" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.278749 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.281951 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-29dvv" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.285889 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.328628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.328691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hckd2\" (UniqueName: \"kubernetes.io/projected/b6d3a092-b799-497b-9ca7-10f0578b0f7b-kube-api-access-hckd2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.430424 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.430503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hckd2\" (UniqueName: \"kubernetes.io/projected/b6d3a092-b799-497b-9ca7-10f0578b0f7b-kube-api-access-hckd2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.431626 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.451256 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hckd2\" (UniqueName: \"kubernetes.io/projected/b6d3a092-b799-497b-9ca7-10f0578b0f7b-kube-api-access-hckd2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.459287 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b6d3a092-b799-497b-9ca7-10f0578b0f7b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc 
kubenswrapper[4832]: I1204 07:06:16.604704 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.873364 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 04 07:06:16 crc kubenswrapper[4832]: I1204 07:06:16.876719 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 07:06:17 crc kubenswrapper[4832]: I1204 07:06:17.438814 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b6d3a092-b799-497b-9ca7-10f0578b0f7b","Type":"ContainerStarted","Data":"5c65668d8cd8d2abbee5f76509f5b6a45ef2fca5d6204050baf6f82801280404"} Dec 04 07:06:19 crc kubenswrapper[4832]: I1204 07:06:19.458985 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b6d3a092-b799-497b-9ca7-10f0578b0f7b","Type":"ContainerStarted","Data":"3b67cb221c1cd88353381fcc1a0a96bb19ad5df1f83883a680c76e3c76d9d3cb"} Dec 04 07:06:19 crc kubenswrapper[4832]: I1204 07:06:19.483223 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.840488415 podStartE2EDuration="3.483197258s" podCreationTimestamp="2025-12-04 07:06:16 +0000 UTC" firstStartedPulling="2025-12-04 07:06:16.876329115 +0000 UTC m=+3432.489146821" lastFinishedPulling="2025-12-04 07:06:18.519037958 +0000 UTC m=+3434.131855664" observedRunningTime="2025-12-04 07:06:19.473638203 +0000 UTC m=+3435.086455919" watchObservedRunningTime="2025-12-04 07:06:19.483197258 +0000 UTC m=+3435.096014974" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.598439 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qv5m8/must-gather-2hc97"] Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.601249 4832 util.go:30] "No sandbox for pod can be found. 
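[Annotation] The startup-latency record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). This is my reading of the numbers, not a definition taken from kubelet documentation. A small Go check using the exact timestamps from the entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-04 07:06:16 +0000 UTC")
	firstPull := parse("2025-12-04 07:06:16.876329115 +0000 UTC")
	lastPull := parse("2025-12-04 07:06:18.519037958 +0000 UTC")
	running := parse("2025-12-04 07:06:19.483197258 +0000 UTC")

	e2e := running.Sub(created)          // 3.483197258s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.840488415s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}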
Need to start a new one" pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.605380 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qv5m8"/"openshift-service-ca.crt" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.605839 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qv5m8"/"kube-root-ca.crt" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.635109 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qv5m8/must-gather-2hc97"] Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.789872 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21e66927-9ab9-4f94-9843-02b60fb8041a-must-gather-output\") pod \"must-gather-2hc97\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.789924 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hv78\" (UniqueName: \"kubernetes.io/projected/21e66927-9ab9-4f94-9843-02b60fb8041a-kube-api-access-4hv78\") pod \"must-gather-2hc97\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.891443 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21e66927-9ab9-4f94-9843-02b60fb8041a-must-gather-output\") pod \"must-gather-2hc97\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.891487 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hv78\" (UniqueName: \"kubernetes.io/projected/21e66927-9ab9-4f94-9843-02b60fb8041a-kube-api-access-4hv78\") pod \"must-gather-2hc97\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.892062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21e66927-9ab9-4f94-9843-02b60fb8041a-must-gather-output\") pod \"must-gather-2hc97\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.914273 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hv78\" (UniqueName: \"kubernetes.io/projected/21e66927-9ab9-4f94-9843-02b60fb8041a-kube-api-access-4hv78\") pod \"must-gather-2hc97\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:42 crc kubenswrapper[4832]: I1204 07:06:42.928445 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:06:43 crc kubenswrapper[4832]: I1204 07:06:43.488588 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qv5m8/must-gather-2hc97"] Dec 04 07:06:43 crc kubenswrapper[4832]: I1204 07:06:43.727092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/must-gather-2hc97" event={"ID":"21e66927-9ab9-4f94-9843-02b60fb8041a","Type":"ContainerStarted","Data":"be3dbe9ea4fddcda9186d7b32442fe84d667002bc8836649912a99b86bb3d2c3"} Dec 04 07:06:47 crc kubenswrapper[4832]: I1204 07:06:47.774732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/must-gather-2hc97" event={"ID":"21e66927-9ab9-4f94-9843-02b60fb8041a","Type":"ContainerStarted","Data":"2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4"} Dec 04 07:06:48 crc kubenswrapper[4832]: I1204 07:06:48.800645 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/must-gather-2hc97" event={"ID":"21e66927-9ab9-4f94-9843-02b60fb8041a","Type":"ContainerStarted","Data":"cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265"} Dec 04 07:06:48 crc kubenswrapper[4832]: I1204 07:06:48.824331 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qv5m8/must-gather-2hc97" podStartSLOduration=2.8316711789999998 podStartE2EDuration="6.824307953s" podCreationTimestamp="2025-12-04 07:06:42 +0000 UTC" firstStartedPulling="2025-12-04 07:06:43.502128807 +0000 UTC m=+3459.114946513" lastFinishedPulling="2025-12-04 07:06:47.494765571 +0000 UTC m=+3463.107583287" observedRunningTime="2025-12-04 07:06:48.817094996 +0000 UTC m=+3464.429912712" watchObservedRunningTime="2025-12-04 07:06:48.824307953 +0000 UTC m=+3464.437125659" Dec 04 07:06:50 crc kubenswrapper[4832]: E1204 07:06:50.770691 4832 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.107:57848->38.102.83.107:37339: read tcp 38.102.83.107:57848->38.102.83.107:37339: read: connection reset by peer Dec 04 07:06:51 crc kubenswrapper[4832]: E1204 07:06:51.048010 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:57878->38.102.83.107:37339: write tcp 38.102.83.107:57878->38.102.83.107:37339: write: broken pipe Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.506787 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-dcdkw"] Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.508146 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.512439 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qv5m8"/"default-dockercfg-mwhqm" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.618509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5043b95f-b635-4a68-ade8-ff4b40c18a86-host\") pod \"crc-debug-dcdkw\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.618591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nmg\" (UniqueName: \"kubernetes.io/projected/5043b95f-b635-4a68-ade8-ff4b40c18a86-kube-api-access-58nmg\") pod \"crc-debug-dcdkw\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.720442 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5043b95f-b635-4a68-ade8-ff4b40c18a86-host\") pod \"crc-debug-dcdkw\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.720568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5043b95f-b635-4a68-ade8-ff4b40c18a86-host\") pod \"crc-debug-dcdkw\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.720680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58nmg\" (UniqueName: \"kubernetes.io/projected/5043b95f-b635-4a68-ade8-ff4b40c18a86-kube-api-access-58nmg\") pod \"crc-debug-dcdkw\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.746550 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58nmg\" (UniqueName: \"kubernetes.io/projected/5043b95f-b635-4a68-ade8-ff4b40c18a86-kube-api-access-58nmg\") pod \"crc-debug-dcdkw\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: I1204 07:06:51.827692 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:06:51 crc kubenswrapper[4832]: W1204 07:06:51.874184 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5043b95f_b635_4a68_ade8_ff4b40c18a86.slice/crio-2d73bc03a71ec8066f293f53866d9dcb904386e64c55b2165e0fdf71a5b4b101 WatchSource:0}: Error finding container 2d73bc03a71ec8066f293f53866d9dcb904386e64c55b2165e0fdf71a5b4b101: Status 404 returned error can't find the container with id 2d73bc03a71ec8066f293f53866d9dcb904386e64c55b2165e0fdf71a5b4b101 Dec 04 07:06:52 crc kubenswrapper[4832]: I1204 07:06:52.845198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" event={"ID":"5043b95f-b635-4a68-ade8-ff4b40c18a86","Type":"ContainerStarted","Data":"2d73bc03a71ec8066f293f53866d9dcb904386e64c55b2165e0fdf71a5b4b101"} Dec 04 07:07:03 crc kubenswrapper[4832]: I1204 07:07:03.982747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" event={"ID":"5043b95f-b635-4a68-ade8-ff4b40c18a86","Type":"ContainerStarted","Data":"352d55af81fdb00b6dd6decc96e5ff9c842130939a99b19149206995147d2a24"} Dec 04 07:07:04 crc kubenswrapper[4832]: I1204 07:07:04.001266 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" podStartSLOduration=1.128387497 podStartE2EDuration="13.001240801s" podCreationTimestamp="2025-12-04 07:06:51 +0000 UTC" firstStartedPulling="2025-12-04 07:06:51.876983141 +0000 UTC m=+3467.489800847" lastFinishedPulling="2025-12-04 07:07:03.749836445 +0000 UTC m=+3479.362654151" observedRunningTime="2025-12-04 07:07:03.997217353 +0000 UTC m=+3479.610035059" watchObservedRunningTime="2025-12-04 07:07:04.001240801 +0000 UTC m=+3479.614058507" Dec 04 07:07:05 crc kubenswrapper[4832]: I1204 07:07:05.362772 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:07:05 crc kubenswrapper[4832]: I1204 07:07:05.363470 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.634562 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nb2d2"] Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.655954 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb2d2"] Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.656109 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.668828 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-utilities\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.668902 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-catalog-content\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.669062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76fm\" (UniqueName: \"kubernetes.io/projected/8feec375-35a8-40d6-b2dc-07a42bc511e9-kube-api-access-k76fm\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.772129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76fm\" (UniqueName: \"kubernetes.io/projected/8feec375-35a8-40d6-b2dc-07a42bc511e9-kube-api-access-k76fm\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.772249 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-utilities\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.772290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-catalog-content\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.772838 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-catalog-content\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.772876 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-utilities\") pod \"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.803534 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76fm\" (UniqueName: \"kubernetes.io/projected/8feec375-35a8-40d6-b2dc-07a42bc511e9-kube-api-access-k76fm\") pod 
\"community-operators-nb2d2\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:30 crc kubenswrapper[4832]: I1204 07:07:30.983567 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:31 crc kubenswrapper[4832]: I1204 07:07:31.613918 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb2d2"] Dec 04 07:07:32 crc kubenswrapper[4832]: I1204 07:07:32.288879 4832 generic.go:334] "Generic (PLEG): container finished" podID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerID="5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164" exitCode=0 Dec 04 07:07:32 crc kubenswrapper[4832]: I1204 07:07:32.289013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerDied","Data":"5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164"} Dec 04 07:07:32 crc kubenswrapper[4832]: I1204 07:07:32.289309 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerStarted","Data":"ee1325254574a426a31f86ff71817fbfe260d367e1118375fd8a718b7da58819"} Dec 04 07:07:33 crc kubenswrapper[4832]: I1204 07:07:33.303602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerStarted","Data":"eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17"} Dec 04 07:07:34 crc kubenswrapper[4832]: I1204 07:07:34.326690 4832 generic.go:334] "Generic (PLEG): container finished" podID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerID="eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17" exitCode=0 Dec 04 07:07:34 crc kubenswrapper[4832]: I1204 07:07:34.327258 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerDied","Data":"eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17"} Dec 04 07:07:35 crc kubenswrapper[4832]: I1204 07:07:35.338360 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerStarted","Data":"61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681"} Dec 04 07:07:35 crc kubenswrapper[4832]: I1204 07:07:35.370173 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:07:35 crc kubenswrapper[4832]: I1204 07:07:35.370248 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:07:40 crc kubenswrapper[4832]: I1204 07:07:40.985140 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:40 crc kubenswrapper[4832]: I1204 07:07:40.985723 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:41 crc kubenswrapper[4832]: I1204 07:07:41.058509 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:41 crc kubenswrapper[4832]: I1204 07:07:41.080540 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nb2d2" podStartSLOduration=8.484981259 podStartE2EDuration="11.080517332s" podCreationTimestamp="2025-12-04 07:07:30 +0000 UTC" firstStartedPulling="2025-12-04 07:07:32.291671436 +0000 UTC m=+3507.904489142" lastFinishedPulling="2025-12-04 07:07:34.887207509 +0000 UTC m=+3510.500025215" observedRunningTime="2025-12-04 07:07:35.391183661 +0000 UTC m=+3511.004001367" watchObservedRunningTime="2025-12-04 07:07:41.080517332 +0000 UTC m=+3516.693335038" Dec 04 07:07:41 crc kubenswrapper[4832]: I1204 07:07:41.469577 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:41 crc kubenswrapper[4832]: I1204 07:07:41.529568 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb2d2"] Dec 04 07:07:43 crc kubenswrapper[4832]: I1204 07:07:43.431815 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nb2d2" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="registry-server" containerID="cri-o://61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681" gracePeriod=2 Dec 04 07:07:43 crc kubenswrapper[4832]: I1204 07:07:43.932726 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:43 crc kubenswrapper[4832]: I1204 07:07:43.980019 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k76fm\" (UniqueName: \"kubernetes.io/projected/8feec375-35a8-40d6-b2dc-07a42bc511e9-kube-api-access-k76fm\") pod \"8feec375-35a8-40d6-b2dc-07a42bc511e9\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " Dec 04 07:07:43 crc kubenswrapper[4832]: I1204 07:07:43.980422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-utilities\") pod \"8feec375-35a8-40d6-b2dc-07a42bc511e9\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " Dec 04 07:07:43 crc kubenswrapper[4832]: I1204 07:07:43.980456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-catalog-content\") pod \"8feec375-35a8-40d6-b2dc-07a42bc511e9\" (UID: \"8feec375-35a8-40d6-b2dc-07a42bc511e9\") " Dec 04 07:07:43 crc kubenswrapper[4832]: I1204 07:07:43.981374 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-utilities" (OuterVolumeSpecName: "utilities") pod "8feec375-35a8-40d6-b2dc-07a42bc511e9" (UID: "8feec375-35a8-40d6-b2dc-07a42bc511e9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.001939 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8feec375-35a8-40d6-b2dc-07a42bc511e9-kube-api-access-k76fm" (OuterVolumeSpecName: "kube-api-access-k76fm") pod "8feec375-35a8-40d6-b2dc-07a42bc511e9" (UID: "8feec375-35a8-40d6-b2dc-07a42bc511e9"). InnerVolumeSpecName "kube-api-access-k76fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.043060 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8feec375-35a8-40d6-b2dc-07a42bc511e9" (UID: "8feec375-35a8-40d6-b2dc-07a42bc511e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.082570 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.082626 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8feec375-35a8-40d6-b2dc-07a42bc511e9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.082644 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k76fm\" (UniqueName: \"kubernetes.io/projected/8feec375-35a8-40d6-b2dc-07a42bc511e9-kube-api-access-k76fm\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.444366 4832 generic.go:334] "Generic (PLEG): container finished" podID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerID="61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681" exitCode=0 Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.444444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerDied","Data":"61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681"} Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.444474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb2d2" event={"ID":"8feec375-35a8-40d6-b2dc-07a42bc511e9","Type":"ContainerDied","Data":"ee1325254574a426a31f86ff71817fbfe260d367e1118375fd8a718b7da58819"} Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.444499 4832 scope.go:117] "RemoveContainer" containerID="61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.444683 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb2d2" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.466351 4832 scope.go:117] "RemoveContainer" containerID="eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.489889 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb2d2"] Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.499938 4832 scope.go:117] "RemoveContainer" containerID="5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.500123 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nb2d2"] Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.548358 4832 scope.go:117] "RemoveContainer" containerID="61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681" Dec 04 07:07:44 crc kubenswrapper[4832]: E1204 07:07:44.551971 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681\": container with ID starting with 61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681 not found: ID does not exist" containerID="61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.552032 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681"} err="failed to get container status \"61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681\": rpc error: code = NotFound desc = could not find container \"61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681\": container with ID starting with 61c2969fb2e0cdc85cbd9d899368c21571ead0f772e4a55be206822f07ef1681 not found: ID does not exist" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.552089 4832 scope.go:117] "RemoveContainer" containerID="eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17" Dec 04 07:07:44 crc kubenswrapper[4832]: E1204 07:07:44.555555 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17\": container with ID starting with eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17 not found: ID does not exist" containerID="eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.555702 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17"} err="failed to get container status \"eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17\": rpc error: code = NotFound desc = could not find container \"eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17\": container with ID starting with eb307ffd90cb74ba2588c54ea27063cb99e38eb64852738a662cf218ff11df17 not found: ID does not exist" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.555758 4832 scope.go:117] "RemoveContainer" containerID="5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164" Dec 04 07:07:44 crc kubenswrapper[4832]: E1204 07:07:44.557295 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164\": container with ID starting with 5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164 not found: ID does not exist" containerID="5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.557411 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164"} err="failed to get container status \"5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164\": rpc error: code = NotFound desc = could not find container \"5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164\": container with ID starting with 5e24e22169db891af1501611f4cc5750e15aaf1d74543bb6595c4c0ca9421164 not found: ID does not exist" Dec 04 07:07:44 crc kubenswrapper[4832]: I1204 07:07:44.723984 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" path="/var/lib/kubelet/pods/8feec375-35a8-40d6-b2dc-07a42bc511e9/volumes" Dec 04 07:07:50 crc kubenswrapper[4832]: I1204 07:07:50.526977 4832 generic.go:334] "Generic (PLEG): container finished" podID="5043b95f-b635-4a68-ade8-ff4b40c18a86" containerID="352d55af81fdb00b6dd6decc96e5ff9c842130939a99b19149206995147d2a24" exitCode=0 Dec 04 07:07:50 crc kubenswrapper[4832]: I1204 07:07:50.527040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" event={"ID":"5043b95f-b635-4a68-ade8-ff4b40c18a86","Type":"ContainerDied","Data":"352d55af81fdb00b6dd6decc96e5ff9c842130939a99b19149206995147d2a24"} Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.634985 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.676469 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-dcdkw"] Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.688213 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-dcdkw"] Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.766906 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58nmg\" (UniqueName: \"kubernetes.io/projected/5043b95f-b635-4a68-ade8-ff4b40c18a86-kube-api-access-58nmg\") pod \"5043b95f-b635-4a68-ade8-ff4b40c18a86\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.767061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5043b95f-b635-4a68-ade8-ff4b40c18a86-host\") pod \"5043b95f-b635-4a68-ade8-ff4b40c18a86\" (UID: \"5043b95f-b635-4a68-ade8-ff4b40c18a86\") " Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.767329 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5043b95f-b635-4a68-ade8-ff4b40c18a86-host" (OuterVolumeSpecName: "host") pod "5043b95f-b635-4a68-ade8-ff4b40c18a86" (UID: "5043b95f-b635-4a68-ade8-ff4b40c18a86"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.767989 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5043b95f-b635-4a68-ade8-ff4b40c18a86-host\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.773038 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5043b95f-b635-4a68-ade8-ff4b40c18a86-kube-api-access-58nmg" (OuterVolumeSpecName: "kube-api-access-58nmg") pod "5043b95f-b635-4a68-ade8-ff4b40c18a86" (UID: "5043b95f-b635-4a68-ade8-ff4b40c18a86"). InnerVolumeSpecName "kube-api-access-58nmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:07:51 crc kubenswrapper[4832]: I1204 07:07:51.870431 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58nmg\" (UniqueName: \"kubernetes.io/projected/5043b95f-b635-4a68-ade8-ff4b40c18a86-kube-api-access-58nmg\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.548717 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d73bc03a71ec8066f293f53866d9dcb904386e64c55b2165e0fdf71a5b4b101" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.549036 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-dcdkw" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.721511 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5043b95f-b635-4a68-ade8-ff4b40c18a86" path="/var/lib/kubelet/pods/5043b95f-b635-4a68-ade8-ff4b40c18a86/volumes" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.865811 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-fhdj5"] Dec 04 07:07:52 crc kubenswrapper[4832]: E1204 07:07:52.866267 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="extract-content" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.866286 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="extract-content" Dec 04 07:07:52 crc kubenswrapper[4832]: E1204 07:07:52.866295 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="extract-utilities" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.866302 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="extract-utilities" Dec 04 07:07:52 crc kubenswrapper[4832]: E1204 07:07:52.866314 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5043b95f-b635-4a68-ade8-ff4b40c18a86" containerName="container-00" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.866320 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5043b95f-b635-4a68-ade8-ff4b40c18a86" containerName="container-00" Dec 04 07:07:52 crc kubenswrapper[4832]: E1204 07:07:52.866350 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="registry-server" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.866356 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="registry-server" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.866567 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8feec375-35a8-40d6-b2dc-07a42bc511e9" containerName="registry-server" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.866600 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5043b95f-b635-4a68-ade8-ff4b40c18a86" containerName="container-00" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.867265 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.869799 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qv5m8"/"default-dockercfg-mwhqm" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.994962 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttzx\" (UniqueName: \"kubernetes.io/projected/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-kube-api-access-cttzx\") pod \"crc-debug-fhdj5\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:52 crc kubenswrapper[4832]: I1204 07:07:52.995032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-host\") pod \"crc-debug-fhdj5\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.097045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttzx\" (UniqueName: \"kubernetes.io/projected/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-kube-api-access-cttzx\") pod \"crc-debug-fhdj5\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.097121 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-host\") pod \"crc-debug-fhdj5\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.097400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-host\") pod \"crc-debug-fhdj5\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.124513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cttzx\" (UniqueName: \"kubernetes.io/projected/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-kube-api-access-cttzx\") pod \"crc-debug-fhdj5\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.187358 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.576099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" event={"ID":"77cb800f-2d26-4a5c-b280-b0ab1cd93e54","Type":"ContainerStarted","Data":"454f025f23db825be994be82f61c782a9d7b42ff41bf3fe2df776b4d953cfcae"} Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.576756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" event={"ID":"77cb800f-2d26-4a5c-b280-b0ab1cd93e54","Type":"ContainerStarted","Data":"3902ae3263f0f821697e29ae18eebdc3f5e6fdf27339fa09a000ab028f09b2a0"} Dec 04 07:07:53 crc kubenswrapper[4832]: I1204 07:07:53.594956 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" podStartSLOduration=1.594940533 podStartE2EDuration="1.594940533s" podCreationTimestamp="2025-12-04 07:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 07:07:53.594232255 +0000 UTC m=+3529.207049961" watchObservedRunningTime="2025-12-04 07:07:53.594940533 +0000 UTC m=+3529.207758239" Dec 04 07:07:54 crc kubenswrapper[4832]: I1204 07:07:54.588748 4832 generic.go:334] "Generic (PLEG): container finished" podID="77cb800f-2d26-4a5c-b280-b0ab1cd93e54" containerID="454f025f23db825be994be82f61c782a9d7b42ff41bf3fe2df776b4d953cfcae" exitCode=0 Dec 04 07:07:54 crc kubenswrapper[4832]: I1204 07:07:54.588804 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" event={"ID":"77cb800f-2d26-4a5c-b280-b0ab1cd93e54","Type":"ContainerDied","Data":"454f025f23db825be994be82f61c782a9d7b42ff41bf3fe2df776b4d953cfcae"} Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.740201 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.776746 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-fhdj5"] Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.787218 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-fhdj5"] Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.855355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-host\") pod \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.855443 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cttzx\" (UniqueName: \"kubernetes.io/projected/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-kube-api-access-cttzx\") pod \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\" (UID: \"77cb800f-2d26-4a5c-b280-b0ab1cd93e54\") " Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.855510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-host" (OuterVolumeSpecName: "host") pod "77cb800f-2d26-4a5c-b280-b0ab1cd93e54" (UID: "77cb800f-2d26-4a5c-b280-b0ab1cd93e54"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.856305 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-host\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.866629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-kube-api-access-cttzx" (OuterVolumeSpecName: "kube-api-access-cttzx") pod "77cb800f-2d26-4a5c-b280-b0ab1cd93e54" (UID: "77cb800f-2d26-4a5c-b280-b0ab1cd93e54"). InnerVolumeSpecName "kube-api-access-cttzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:07:55 crc kubenswrapper[4832]: I1204 07:07:55.957981 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cttzx\" (UniqueName: \"kubernetes.io/projected/77cb800f-2d26-4a5c-b280-b0ab1cd93e54-kube-api-access-cttzx\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.646172 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3902ae3263f0f821697e29ae18eebdc3f5e6fdf27339fa09a000ab028f09b2a0" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.646224 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-fhdj5" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.721743 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77cb800f-2d26-4a5c-b280-b0ab1cd93e54" path="/var/lib/kubelet/pods/77cb800f-2d26-4a5c-b280-b0ab1cd93e54/volumes" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.942326 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-5d2ch"] Dec 04 07:07:56 crc kubenswrapper[4832]: E1204 07:07:56.944130 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cb800f-2d26-4a5c-b280-b0ab1cd93e54" containerName="container-00" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.944220 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cb800f-2d26-4a5c-b280-b0ab1cd93e54" containerName="container-00" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.944578 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="77cb800f-2d26-4a5c-b280-b0ab1cd93e54" containerName="container-00" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.945324 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:56 crc kubenswrapper[4832]: I1204 07:07:56.947414 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qv5m8"/"default-dockercfg-mwhqm" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.080040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqdr\" (UniqueName: \"kubernetes.io/projected/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-kube-api-access-2lqdr\") pod \"crc-debug-5d2ch\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.080341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-host\") pod \"crc-debug-5d2ch\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.182472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lqdr\" (UniqueName: \"kubernetes.io/projected/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-kube-api-access-2lqdr\") pod \"crc-debug-5d2ch\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.182569 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-host\") pod \"crc-debug-5d2ch\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.182747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-host\") pod \"crc-debug-5d2ch\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.201994 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lqdr\" (UniqueName: \"kubernetes.io/projected/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-kube-api-access-2lqdr\") pod \"crc-debug-5d2ch\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.271069 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.659237 4832 generic.go:334] "Generic (PLEG): container finished" podID="fdb5c203-dac1-49d1-97a2-695dec6aa3b9" containerID="33bee5ace6c357a78053b6176f302f5cc7e23e912f52836a1599521a1a175c2a" exitCode=0 Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.659324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" event={"ID":"fdb5c203-dac1-49d1-97a2-695dec6aa3b9","Type":"ContainerDied","Data":"33bee5ace6c357a78053b6176f302f5cc7e23e912f52836a1599521a1a175c2a"} Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.660053 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" event={"ID":"fdb5c203-dac1-49d1-97a2-695dec6aa3b9","Type":"ContainerStarted","Data":"576505f833c10ed4e34de464f85d0d0af8b19c25e5af202130ee9a274a8595b4"} Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.717708 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-5d2ch"] Dec 04 07:07:57 crc kubenswrapper[4832]: I1204 07:07:57.731799 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qv5m8/crc-debug-5d2ch"] Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.769831 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.815850 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lqdr\" (UniqueName: \"kubernetes.io/projected/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-kube-api-access-2lqdr\") pod \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.816519 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-host\") pod \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\" (UID: \"fdb5c203-dac1-49d1-97a2-695dec6aa3b9\") " Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.816655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-host" (OuterVolumeSpecName: "host") pod "fdb5c203-dac1-49d1-97a2-695dec6aa3b9" (UID: "fdb5c203-dac1-49d1-97a2-695dec6aa3b9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.817232 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-host\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.822538 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-kube-api-access-2lqdr" (OuterVolumeSpecName: "kube-api-access-2lqdr") pod "fdb5c203-dac1-49d1-97a2-695dec6aa3b9" (UID: "fdb5c203-dac1-49d1-97a2-695dec6aa3b9"). InnerVolumeSpecName "kube-api-access-2lqdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:07:58 crc kubenswrapper[4832]: I1204 07:07:58.918775 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lqdr\" (UniqueName: \"kubernetes.io/projected/fdb5c203-dac1-49d1-97a2-695dec6aa3b9-kube-api-access-2lqdr\") on node \"crc\" DevicePath \"\"" Dec 04 07:07:59 crc kubenswrapper[4832]: I1204 07:07:59.679781 4832 scope.go:117] "RemoveContainer" containerID="33bee5ace6c357a78053b6176f302f5cc7e23e912f52836a1599521a1a175c2a" Dec 04 07:07:59 crc kubenswrapper[4832]: I1204 07:07:59.679821 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/crc-debug-5d2ch" Dec 04 07:08:00 crc kubenswrapper[4832]: I1204 07:08:00.733120 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb5c203-dac1-49d1-97a2-695dec6aa3b9" path="/var/lib/kubelet/pods/fdb5c203-dac1-49d1-97a2-695dec6aa3b9/volumes" Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.362759 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.363366 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.363442 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.364346 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.364439 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" gracePeriod=600 Dec 04 07:08:05 crc kubenswrapper[4832]: E1204 07:08:05.532148 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.812794 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" exitCode=0 Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.813679 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac"} Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.813805 4832 scope.go:117] "RemoveContainer" containerID="856a93375e1f4eb567291b7ae816d28801aab20bb53d2043d8e3e06041624af8" Dec 04 07:08:05 crc kubenswrapper[4832]: I1204 07:08:05.814868 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:08:05 crc kubenswrapper[4832]: E1204 07:08:05.815253 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.086213 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5869d975cd-47z8d_26ad016c-9e7b-49bd-9031-830f8319a79d/barbican-api/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.263417 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5869d975cd-47z8d_26ad016c-9e7b-49bd-9031-830f8319a79d/barbican-api-log/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.377099 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6955d5c798-vn8dg_94c9f353-9085-4009-b151-3d5f9418148e/barbican-keystone-listener/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.407925 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6955d5c798-vn8dg_94c9f353-9085-4009-b151-3d5f9418148e/barbican-keystone-listener-log/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.651991 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76cc4f7d9f-s989p_4b753665-a7f4-4c62-b4ee-a0842bbbe487/barbican-worker-log/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.661051 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76cc4f7d9f-s989p_4b753665-a7f4-4c62-b4ee-a0842bbbe487/barbican-worker/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.817908 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m_45d69295-db9b-4a70-a031-2e19abcf6be1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:15 crc kubenswrapper[4832]: I1204 07:08:15.975257 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/ceilometer-central-agent/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.006001 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/ceilometer-notification-agent/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.049981 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/proxy-httpd/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.311069 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/sg-core/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.487664 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b68da07-a347-452b-85c0-ba171d852d15/cinder-api/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.503361 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b68da07-a347-452b-85c0-ba171d852d15/cinder-api-log/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.747842 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3fc4e266-71ba-403f-a4a3-6c7dc12995b7/cinder-scheduler/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.801169 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3fc4e266-71ba-403f-a4a3-6c7dc12995b7/probe/0.log" Dec 04 07:08:16 crc kubenswrapper[4832]: I1204 07:08:16.891148 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx_4fdaa066-59c4-4491-961c-d72bb1a75243/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.063299 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm_452e59ff-3e14-4082-b812-ff4d5d671b27/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.172507 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-wpmzl_fc3ad9fb-9341-4b1f-8b27-ee71d9f37309/init/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.356981 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-wpmzl_fc3ad9fb-9341-4b1f-8b27-ee71d9f37309/init/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.408162 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-t96k5_a88a60b4-19c2-4ef9-b586-2b6733219e7a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.440191 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-wpmzl_fc3ad9fb-9341-4b1f-8b27-ee71d9f37309/dnsmasq-dns/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.678074 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_19e43c1f-dcda-45c3-84aa-fe00d987d334/glance-httpd/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.719238 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_19e43c1f-dcda-45c3-84aa-fe00d987d334/glance-log/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.898226 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_44deec12-659d-4dcf-a08b-252f6a004f0b/glance-log/0.log" Dec 04 07:08:17 crc kubenswrapper[4832]: I1204 07:08:17.998152 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_44deec12-659d-4dcf-a08b-252f6a004f0b/glance-httpd/0.log" Dec 04 07:08:18 crc kubenswrapper[4832]: I1204 07:08:18.100500 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-847bcdcbb8-ph9ks_a75235c9-c000-495b-92d7-797733f10601/horizon/0.log" Dec 04 07:08:18 crc kubenswrapper[4832]: I1204 07:08:18.356492 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj_fa905232-11b8-4af4-96a8-7a7ef46bf17d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:18 crc kubenswrapper[4832]: I1204 07:08:18.494097 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-847bcdcbb8-ph9ks_a75235c9-c000-495b-92d7-797733f10601/horizon-log/0.log" Dec 04 07:08:18 crc kubenswrapper[4832]: I1204 07:08:18.628980 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nhghp_069bfa79-a14b-4545-a791-be3f21ed774f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:18 crc kubenswrapper[4832]: I1204 07:08:18.873543 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5fb459446f-clqb5_dceba324-ac23-407c-ac0d-7ca4abce124d/keystone-api/0.log" Dec 04 07:08:18 crc kubenswrapper[4832]: I1204 07:08:18.907656 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29413861-44mgd_38e7bb37-9dd3-4010-89f6-e49c6d710eab/keystone-cron/0.log" Dec 04 07:08:19 crc kubenswrapper[4832]: I1204 07:08:19.024880 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a8f66227-3513-4327-81ec-2f1f147294e8/kube-state-metrics/0.log" Dec 04 07:08:19 crc kubenswrapper[4832]: I1204 07:08:19.183379 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2_9e2d5cc2-a6c8-4953-8c34-650c047c5848/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:19 crc kubenswrapper[4832]: I1204 07:08:19.713085 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:08:19 crc kubenswrapper[4832]: E1204 07:08:19.713562 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:08:19 crc kubenswrapper[4832]: I1204 07:08:19.759024 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdbc57ff5-2cpdh_0864aed7-87aa-47d4-b38e-17d8863bb83e/neutron-api/0.log" Dec 04 07:08:19 crc kubenswrapper[4832]: I1204 07:08:19.818442 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx_27b9693c-3bb0-4819-bbcf-87634b6bb8e3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:19 crc kubenswrapper[4832]: I1204 07:08:19.828171 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdbc57ff5-2cpdh_0864aed7-87aa-47d4-b38e-17d8863bb83e/neutron-httpd/0.log" Dec 04 07:08:20 crc kubenswrapper[4832]: I1204 07:08:20.472604 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4/nova-api-log/0.log" Dec 04 07:08:20 crc kubenswrapper[4832]: I1204 07:08:20.556316 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_83d2f2b1-c068-4912-9c17-adb96b1d9233/nova-cell0-conductor-conductor/0.log" Dec 04 07:08:20 crc kubenswrapper[4832]: I1204 07:08:20.716258 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4/nova-api-api/0.log" Dec 04 07:08:20 crc kubenswrapper[4832]: I1204 07:08:20.869842 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d02a8c92-b3b4-4c91-8d11-e0937e4928e0/nova-cell1-conductor-conductor/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.000889 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1c696d5d-0ba4-4406-aa1e-ef9d99df8136/nova-cell1-novncproxy-novncproxy/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.123631 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gklbz_dcd764c1-4caa-4556-a755-3237f104b88e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.398379 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f30e5fb-d9e0-4048-9e6e-3559465be9d4/nova-metadata-log/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.591613 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4e159726-cef9-46df-b183-6b0b2f5b013e/nova-scheduler-scheduler/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.646995 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9841a1c2-83f5-475b-8180-b1e9cd13467b/mysql-bootstrap/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.853351 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9841a1c2-83f5-475b-8180-b1e9cd13467b/mysql-bootstrap/0.log" Dec 04 07:08:21 crc kubenswrapper[4832]: I1204 07:08:21.872366 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9841a1c2-83f5-475b-8180-b1e9cd13467b/galera/0.log" Dec 04 07:08:22 crc kubenswrapper[4832]: I1204 07:08:22.078927 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_22fcd5ed-0004-4329-b8c6-7855939765dc/mysql-bootstrap/0.log" Dec 04 07:08:22 crc kubenswrapper[4832]: I1204 07:08:22.313536 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_22fcd5ed-0004-4329-b8c6-7855939765dc/mysql-bootstrap/0.log" Dec 04 07:08:22 crc kubenswrapper[4832]: I1204 07:08:22.365202 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_22fcd5ed-0004-4329-b8c6-7855939765dc/galera/0.log" Dec 04 07:08:22 crc kubenswrapper[4832]: I1204 07:08:22.521657 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4936447c-abfd-4bad-b720-db17f1bca70c/openstackclient/0.log" Dec 04 07:08:22 crc kubenswrapper[4832]: I1204 07:08:22.549299 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f30e5fb-d9e0-4048-9e6e-3559465be9d4/nova-metadata-metadata/0.log" Dec 04 07:08:22 crc kubenswrapper[4832]: I1204 07:08:22.613919 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kcxl8_6de6fb2f-c87b-41af-8e93-05d7da0fad2a/ovn-controller/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.035755 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xv2xq_bc55e7c7-bd80-4440-922d-d0711af4b912/openstack-network-exporter/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.060279 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovsdb-server-init/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.285869 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovsdb-server/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.374505 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovs-vswitchd/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.388226 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovsdb-server-init/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.573568 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9j4xg_7cf92a16-6e70-4b63-a14e-30a5b041a80f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.648743 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9b61be98-5007-43c6-b717-dae011be5830/openstack-network-exporter/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.677932 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9b61be98-5007-43c6-b717-dae011be5830/ovn-northd/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.797096 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f89488d-b176-4bf6-9172-ed2fc6492019/openstack-network-exporter/0.log" Dec 04 07:08:23 crc kubenswrapper[4832]: I1204 07:08:23.870902 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f89488d-b176-4bf6-9172-ed2fc6492019/ovsdbserver-nb/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.058408 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a470eda9-a394-4ecb-a723-404f00bbd45a/openstack-network-exporter/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.091268 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a470eda9-a394-4ecb-a723-404f00bbd45a/ovsdbserver-sb/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.304216 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56555b86cd-htxqh_d282cab8-b359-4fc9-9f34-95b8b1984106/placement-api/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.415759 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56555b86cd-htxqh_d282cab8-b359-4fc9-9f34-95b8b1984106/placement-log/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.495859 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65d1124e-f647-4d3c-b10e-c01691fb6c9b/setup-container/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.685758 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65d1124e-f647-4d3c-b10e-c01691fb6c9b/setup-container/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.763273 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65d1124e-f647-4d3c-b10e-c01691fb6c9b/rabbitmq/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.769205 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5152b11-80fa-4fd7-90df-132972214b18/setup-container/0.log" Dec 04 07:08:24 crc kubenswrapper[4832]: I1204 07:08:24.990523 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5152b11-80fa-4fd7-90df-132972214b18/rabbitmq/0.log" Dec 04 07:08:25 crc kubenswrapper[4832]: I1204 07:08:25.043090 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5152b11-80fa-4fd7-90df-132972214b18/setup-container/0.log" Dec 04 07:08:25 crc kubenswrapper[4832]: I1204 07:08:25.132081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc_98e6fcf2-9409-4e06-846b-d96d4106e2b8/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:25 crc kubenswrapper[4832]: I1204 07:08:25.310577 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-dzcnn_98f3d48c-b338-4b27-893d-f83a3e55ccd8/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:25 crc kubenswrapper[4832]: I1204 07:08:25.384767 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh_2f4cc7d6-382c-46c7-a422-d728d7d8aa19/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:25 crc kubenswrapper[4832]: I1204 07:08:25.522984 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-95gcb_cbeb7492-95ba-4887-afee-a0fada68f151/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:25 crc kubenswrapper[4832]: I1204 07:08:25.691537 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ztwr8_54ad7a01-5a9d-4735-b86d-391a24a663ad/ssh-known-hosts-edpm-deployment/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.003787 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8675b9cf45-xl2pz_6b7c2a80-b3fe-4243-9ea6-19e34f132a16/proxy-httpd/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.236261 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8675b9cf45-xl2pz_6b7c2a80-b3fe-4243-9ea6-19e34f132a16/proxy-server/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.314966 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vnbbf_2aaa5481-3d69-438a-80be-5511ecc55ddf/swift-ring-rebalance/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.489023 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-reaper/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.495996 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-auditor/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.620139 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-replicator/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.648107 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-server/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.775839 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-replicator/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.790859 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-auditor/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.852547 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-server/0.log" Dec 04 07:08:26 crc kubenswrapper[4832]: I1204 07:08:26.887216 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-updater/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.000695 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-expirer/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.078348 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-auditor/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.126072 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-replicator/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.147051 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-server/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.269366 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-updater/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.389572 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/swift-recon-cron/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.457898 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/rsync/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.579628 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p_69a026e8-d207-4ccd-86c7-6e646a80529c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.682889 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_068b63a2-ea9f-4022-8a42-8d345222f5a7/tempest-tests-tempest-tests-runner/0.log" Dec 04 07:08:27 crc kubenswrapper[4832]: I1204 07:08:27.910497 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b6d3a092-b799-497b-9ca7-10f0578b0f7b/test-operator-logs-container/0.log" Dec 04 07:08:28 crc kubenswrapper[4832]: I1204 07:08:28.032367 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-smwz7_c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:08:32 
crc kubenswrapper[4832]: I1204 07:08:32.710702 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:08:32 crc kubenswrapper[4832]: E1204 07:08:32.711568 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:08:35 crc kubenswrapper[4832]: I1204 07:08:35.433853 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e3b075a1-3f92-493c-93d2-a776141dba44/memcached/0.log" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.416586 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqlgj"] Dec 04 07:08:46 crc kubenswrapper[4832]: E1204 07:08:46.417783 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb5c203-dac1-49d1-97a2-695dec6aa3b9" containerName="container-00" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.417802 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb5c203-dac1-49d1-97a2-695dec6aa3b9" containerName="container-00" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.418057 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb5c203-dac1-49d1-97a2-695dec6aa3b9" containerName="container-00" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.419867 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.451688 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqlgj"] Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.508670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-utilities\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.508743 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgn8\" (UniqueName: \"kubernetes.io/projected/12a2318a-ae04-4bcd-a180-fa36f725d4a1-kube-api-access-kxgn8\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.508821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-catalog-content\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.610803 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-catalog-content\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " 
pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.610966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-utilities\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.611001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgn8\" (UniqueName: \"kubernetes.io/projected/12a2318a-ae04-4bcd-a180-fa36f725d4a1-kube-api-access-kxgn8\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.611438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-catalog-content\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.611589 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-utilities\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.636272 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgn8\" (UniqueName: \"kubernetes.io/projected/12a2318a-ae04-4bcd-a180-fa36f725d4a1-kube-api-access-kxgn8\") pod \"redhat-operators-qqlgj\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.710728 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:08:46 crc kubenswrapper[4832]: E1204 07:08:46.711082 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:08:46 crc kubenswrapper[4832]: I1204 07:08:46.786314 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:47 crc kubenswrapper[4832]: I1204 07:08:47.272163 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqlgj"] Dec 04 07:08:47 crc kubenswrapper[4832]: I1204 07:08:47.394328 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerStarted","Data":"66ef581b3b744ebf4a90fee9f72843648c2d78a2e7e5e84e5599a5ff366af463"} Dec 04 07:08:48 crc kubenswrapper[4832]: I1204 07:08:48.407117 4832 generic.go:334] "Generic (PLEG): container finished" podID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerID="af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca" exitCode=0 Dec 04 07:08:48 crc kubenswrapper[4832]: I1204 07:08:48.407182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerDied","Data":"af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca"} Dec 04 07:08:49 crc kubenswrapper[4832]: I1204 07:08:49.420841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerStarted","Data":"a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e"} Dec 04 07:08:50 crc kubenswrapper[4832]: I1204 07:08:50.434005 4832 generic.go:334] "Generic (PLEG): container finished" podID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerID="a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e" exitCode=0 Dec 04 07:08:50 crc kubenswrapper[4832]: I1204 07:08:50.434111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerDied","Data":"a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e"} Dec 04 07:08:51 crc kubenswrapper[4832]: I1204 07:08:51.447072 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerStarted","Data":"da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a"} Dec 04 07:08:51 crc kubenswrapper[4832]: I1204 07:08:51.469670 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqlgj" podStartSLOduration=2.985749359 podStartE2EDuration="5.469643986s" podCreationTimestamp="2025-12-04 07:08:46 +0000 UTC" firstStartedPulling="2025-12-04 07:08:48.409620017 +0000 UTC m=+3584.022437723" lastFinishedPulling="2025-12-04 07:08:50.893514624 +0000 UTC m=+3586.506332350" observedRunningTime="2025-12-04 07:08:51.467002891 +0000 UTC m=+3587.079820587" watchObservedRunningTime="2025-12-04 07:08:51.469643986 +0000 UTC m=+3587.082461692" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.177521 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xwb"] Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.180459 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.239358 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xwb"] Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.318546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-utilities\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.318638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7lc\" (UniqueName: \"kubernetes.io/projected/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-kube-api-access-mn7lc\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.318774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-catalog-content\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.421473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7lc\" (UniqueName: \"kubernetes.io/projected/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-kube-api-access-mn7lc\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.421711 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-catalog-content\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.421804 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-utilities\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.422434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-utilities\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.422641 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-catalog-content\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.451675 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mn7lc\" (UniqueName: \"kubernetes.io/projected/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-kube-api-access-mn7lc\") pod \"redhat-marketplace-m5xwb\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:55 crc kubenswrapper[4832]: I1204 07:08:55.549195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.010165 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/util/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.145467 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xwb"] Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.193295 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/util/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.366871 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/pull/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.368185 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/pull/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.506039 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerID="f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69" exitCode=0 Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.506090 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerDied","Data":"f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69"} Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.506127 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerStarted","Data":"6fa8ef881a743ae4d7da06ec0323a7c2bb5e80790e8f56b3d3fea553a8acb204"} Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.518619 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/util/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.555355 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/pull/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.602157 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/extract/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.752425 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vjxxr_a85cdbe2-2e25-43b2-bcad-55aaf1e6755d/kube-rbac-proxy/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.786602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.786660 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.832863 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vjxxr_a85cdbe2-2e25-43b2-bcad-55aaf1e6755d/manager/0.log" Dec 04 07:08:56 crc kubenswrapper[4832]: I1204 07:08:56.934217 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-wwmfh_ef8f8bec-efa4-4239-839d-791aed710641/kube-rbac-proxy/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.009944 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-wwmfh_ef8f8bec-efa4-4239-839d-791aed710641/manager/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.157937 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-9cmtc_49edbb71-76d8-4f14-986d-9fd821c55ff4/kube-rbac-proxy/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.204058 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-9cmtc_49edbb71-76d8-4f14-986d-9fd821c55ff4/manager/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.367202 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-s5wdp_f17d47bc-9039-4195-bdbd-e9f58d4c305b/kube-rbac-proxy/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.427608 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-s5wdp_f17d47bc-9039-4195-bdbd-e9f58d4c305b/manager/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.504337 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-7x9qz_860c33f9-d57a-45b6-bc73-670d92e753a4/kube-rbac-proxy/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.520347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerStarted","Data":"a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2"} Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.710538 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:08:57 crc kubenswrapper[4832]: E1204 07:08:57.710836 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" 
podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.724163 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-8qb2w_e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f/kube-rbac-proxy/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.833421 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-7x9qz_860c33f9-d57a-45b6-bc73-670d92e753a4/manager/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.839317 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqlgj" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="registry-server" probeResult="failure" output=< Dec 04 07:08:57 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Dec 04 07:08:57 crc kubenswrapper[4832]: > Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.858129 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-8qb2w_e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f/manager/0.log" Dec 04 07:08:57 crc kubenswrapper[4832]: I1204 07:08:57.970579 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-wr29d_69747c52-1139-4d71-be0d-d6b8d534f0bf/kube-rbac-proxy/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.166568 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-6shfb_2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9/kube-rbac-proxy/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.189412 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-wr29d_69747c52-1139-4d71-be0d-d6b8d534f0bf/manager/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.405143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xd7gs_84bf2c21-9b47-46f8-970e-e2e34c5d0112/kube-rbac-proxy/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.414418 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-6shfb_2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9/manager/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.480634 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xd7gs_84bf2c21-9b47-46f8-970e-e2e34c5d0112/manager/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.531256 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerID="a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2" exitCode=0 Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.531306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerDied","Data":"a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2"} Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.657722 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-zd4wx_7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df/kube-rbac-proxy/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.667894 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-zd4wx_7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df/manager/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.806741 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hwpjd_c0cedc81-309b-4d1f-8349-632ca9d38e96/kube-rbac-proxy/0.log" Dec 04 07:08:58 crc kubenswrapper[4832]: I1204 07:08:58.897145 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hwpjd_c0cedc81-309b-4d1f-8349-632ca9d38e96/manager/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.027604 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dr2cc_35d20429-0e0e-4090-8d0b-9a590e8fd9ab/kube-rbac-proxy/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.104408 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dr2cc_35d20429-0e0e-4090-8d0b-9a590e8fd9ab/manager/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.240545 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-djqmz_81848f9c-5ee4-4fbc-a744-701009bcbe53/kube-rbac-proxy/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.349877 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-djqmz_81848f9c-5ee4-4fbc-a744-701009bcbe53/manager/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.460450 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-zgnkq_e8500aa8-6a4f-4d7b-8939-eab62a946850/kube-rbac-proxy/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.466168 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-zgnkq_e8500aa8-6a4f-4d7b-8939-eab62a946850/manager/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.565724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerStarted","Data":"04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f"} Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.586080 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5xwb" podStartSLOduration=2.158261599 podStartE2EDuration="4.586055908s" podCreationTimestamp="2025-12-04 07:08:55 +0000 UTC" firstStartedPulling="2025-12-04 07:08:56.508425259 +0000 UTC m=+3592.121242965" lastFinishedPulling="2025-12-04 07:08:58.936219568 +0000 UTC m=+3594.549037274" observedRunningTime="2025-12-04 07:08:59.582614254 +0000 UTC m=+3595.195431960" watchObservedRunningTime="2025-12-04 07:08:59.586055908 +0000 UTC m=+3595.198873614" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.693131 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr_4226c957-fd5d-4b1d-84ca-a94e76ff138c/kube-rbac-proxy/0.log" Dec 04 07:08:59 crc kubenswrapper[4832]: I1204 07:08:59.766927 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr_4226c957-fd5d-4b1d-84ca-a94e76ff138c/manager/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.103303 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hzcq6_37203d32-9ea9-4649-b269-71beabc056f9/registry-server/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.152566 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c75cfccc8-zchmr_5413f6c9-52d6-44d8-b58b-babf5f5d4541/operator/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.265260 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lbvnf_63f185bd-a5f7-40a2-b51f-f60bf2c161a9/kube-rbac-proxy/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.481219 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lbvnf_63f185bd-a5f7-40a2-b51f-f60bf2c161a9/manager/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.545405 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-lr247_7184b79e-0476-4d6d-99f3-329ad46dff61/kube-rbac-proxy/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.573635 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-lr247_7184b79e-0476-4d6d-99f3-329ad46dff61/manager/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.811655 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wjbl9_cac84290-1321-4a86-a4c0-06019e9d5dfd/kube-rbac-proxy/0.log" Dec 04 07:09:00 crc kubenswrapper[4832]: I1204 07:09:00.859558 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qx7fl_626ec042-7ccd-4a54-8625-de8861efca16/operator/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.099165 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5986db9d67-699q9_57013f06-c328-4c9c-b4c9-284df662cc0e/manager/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.120230 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wjbl9_cac84290-1321-4a86-a4c0-06019e9d5dfd/manager/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.158455 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-fl4dk_ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea/kube-rbac-proxy/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.287997 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-fl4dk_ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea/manager/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.340357 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zc52r_f897a405-3157-4e56-b2b8-1076557cab9e/kube-rbac-proxy/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.404550 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zc52r_f897a405-3157-4e56-b2b8-1076557cab9e/manager/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.733575 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-htcvz_e6d35b26-0a9e-4174-a073-d0a608dbafcd/kube-rbac-proxy/0.log" Dec 04 07:09:01 crc kubenswrapper[4832]: I1204 07:09:01.735119 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-htcvz_e6d35b26-0a9e-4174-a073-d0a608dbafcd/manager/0.log" Dec 04 07:09:05 crc kubenswrapper[4832]: I1204 07:09:05.552535 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:09:05 crc kubenswrapper[4832]: I1204 07:09:05.553366 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:09:05 crc kubenswrapper[4832]: I1204 07:09:05.615025 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:09:05 crc kubenswrapper[4832]: I1204 07:09:05.716981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:09:05 crc kubenswrapper[4832]: I1204 07:09:05.851588 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xwb"] Dec 04 07:09:06 crc kubenswrapper[4832]: I1204 07:09:06.844970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:09:06 crc kubenswrapper[4832]: I1204 07:09:06.908043 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:09:07 crc kubenswrapper[4832]: I1204 07:09:07.684869 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5xwb" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="registry-server" containerID="cri-o://04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f" gracePeriod=2 Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.212984 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.309704 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqlgj"] Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.324358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-catalog-content\") pod \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.324623 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-utilities\") pod \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.324681 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn7lc\" (UniqueName: \"kubernetes.io/projected/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-kube-api-access-mn7lc\") pod \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\" (UID: \"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc\") " Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.326427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-utilities" (OuterVolumeSpecName: "utilities") pod "f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" (UID: "f9785194-82e5-4e80-bcd9-eaeaa76b9bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.344830 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-kube-api-access-mn7lc" (OuterVolumeSpecName: "kube-api-access-mn7lc") pod "f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" (UID: "f9785194-82e5-4e80-bcd9-eaeaa76b9bbc"). InnerVolumeSpecName "kube-api-access-mn7lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.368874 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" (UID: "f9785194-82e5-4e80-bcd9-eaeaa76b9bbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.426604 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.426932 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn7lc\" (UniqueName: \"kubernetes.io/projected/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-kube-api-access-mn7lc\") on node \"crc\" DevicePath \"\"" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.426995 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.697071 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerID="04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f" exitCode=0 Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.697129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerDied","Data":"04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f"} Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.697525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xwb" event={"ID":"f9785194-82e5-4e80-bcd9-eaeaa76b9bbc","Type":"ContainerDied","Data":"6fa8ef881a743ae4d7da06ec0323a7c2bb5e80790e8f56b3d3fea553a8acb204"} Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.697566 4832 scope.go:117] "RemoveContainer" containerID="04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.697189 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xwb" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.697743 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqlgj" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="registry-server" containerID="cri-o://da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a" gracePeriod=2 Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.726322 4832 scope.go:117] "RemoveContainer" containerID="a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.748484 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xwb"] Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.770456 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xwb"] Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.781791 4832 scope.go:117] "RemoveContainer" containerID="f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.953074 4832 scope.go:117] "RemoveContainer" containerID="04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f" Dec 04 07:09:08 crc kubenswrapper[4832]: E1204 07:09:08.953820 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f\": container with ID starting with 04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f not found: ID does not exist" containerID="04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.953900 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f"} err="failed to get container status \"04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f\": rpc error: code = NotFound desc = could not find container \"04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f\": container with ID starting with 04aa3ee6479557452a216773b0d80e7cc0678daebcc63443ca315a3b7f99655f not found: ID does not exist" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.953973 4832 scope.go:117] "RemoveContainer" containerID="a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2" Dec 04 07:09:08 crc kubenswrapper[4832]: E1204 07:09:08.956490 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2\": container with ID starting with a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2 not found: ID does not exist" containerID="a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.956535 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2"} err="failed to get container status \"a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2\": rpc error: code = NotFound desc = could not find container \"a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2\": container with ID starting with 
a7b61c82af6461d63f10618eb598cb80d2a0958a68f4ee9dc3b0a0997c5c05d2 not found: ID does not exist" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.956575 4832 scope.go:117] "RemoveContainer" containerID="f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69" Dec 04 07:09:08 crc kubenswrapper[4832]: E1204 07:09:08.957063 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69\": container with ID starting with f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69 not found: ID does not exist" containerID="f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69" Dec 04 07:09:08 crc kubenswrapper[4832]: I1204 07:09:08.957085 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69"} err="failed to get container status \"f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69\": rpc error: code = NotFound desc = could not find container \"f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69\": container with ID starting with f6e0917d3dc2cdb66e9ca03e66bd0e882aa4884a1005b6b8665e5fa9d630ef69 not found: ID does not exist" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.178832 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.245157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-utilities\") pod \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.245477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-catalog-content\") pod \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.245514 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxgn8\" (UniqueName: \"kubernetes.io/projected/12a2318a-ae04-4bcd-a180-fa36f725d4a1-kube-api-access-kxgn8\") pod \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\" (UID: \"12a2318a-ae04-4bcd-a180-fa36f725d4a1\") " Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.246321 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-utilities" (OuterVolumeSpecName: "utilities") pod "12a2318a-ae04-4bcd-a180-fa36f725d4a1" (UID: "12a2318a-ae04-4bcd-a180-fa36f725d4a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.286613 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a2318a-ae04-4bcd-a180-fa36f725d4a1-kube-api-access-kxgn8" (OuterVolumeSpecName: "kube-api-access-kxgn8") pod "12a2318a-ae04-4bcd-a180-fa36f725d4a1" (UID: "12a2318a-ae04-4bcd-a180-fa36f725d4a1"). InnerVolumeSpecName "kube-api-access-kxgn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.348518 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxgn8\" (UniqueName: \"kubernetes.io/projected/12a2318a-ae04-4bcd-a180-fa36f725d4a1-kube-api-access-kxgn8\") on node \"crc\" DevicePath \"\"" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.348559 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.365459 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12a2318a-ae04-4bcd-a180-fa36f725d4a1" (UID: "12a2318a-ae04-4bcd-a180-fa36f725d4a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.450619 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12a2318a-ae04-4bcd-a180-fa36f725d4a1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.713491 4832 generic.go:334] "Generic (PLEG): container finished" podID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerID="da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a" exitCode=0 Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.713540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerDied","Data":"da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a"} Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.713579 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqlgj" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.713619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqlgj" event={"ID":"12a2318a-ae04-4bcd-a180-fa36f725d4a1","Type":"ContainerDied","Data":"66ef581b3b744ebf4a90fee9f72843648c2d78a2e7e5e84e5599a5ff366af463"} Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.713648 4832 scope.go:117] "RemoveContainer" containerID="da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.750152 4832 scope.go:117] "RemoveContainer" containerID="a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.759276 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqlgj"] Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.776291 4832 scope.go:117] "RemoveContainer" containerID="af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.779697 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqlgj"] Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.819895 4832 scope.go:117] "RemoveContainer" containerID="da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a" Dec 04 07:09:09 crc kubenswrapper[4832]: E1204 07:09:09.820752 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a\": container with ID starting with da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a not found: ID does not exist" containerID="da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.820796 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a"} err="failed to get container status \"da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a\": rpc error: code = NotFound desc = could not find container \"da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a\": container with ID starting with da7d3be83e5000f3bd4b22e35e9f9d18fe7ca5c7113f16a1c348546e319edd9a not found: ID does not exist" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.820833 4832 scope.go:117] "RemoveContainer" containerID="a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e" Dec 04 07:09:09 crc kubenswrapper[4832]: E1204 07:09:09.821560 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e\": container with ID starting with a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e not found: ID does not exist" containerID="a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.821628 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e"} err="failed to get container status \"a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e\": rpc error: code = NotFound desc = could not find container 
\"a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e\": container with ID starting with a21f82fe193045960dfd4d5d952b4c8f50cf7fd53da2a9f80b71d941affd417e not found: ID does not exist" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.821676 4832 scope.go:117] "RemoveContainer" containerID="af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca" Dec 04 07:09:09 crc kubenswrapper[4832]: E1204 07:09:09.824038 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca\": container with ID starting with af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca not found: ID does not exist" containerID="af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca" Dec 04 07:09:09 crc kubenswrapper[4832]: I1204 07:09:09.824111 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca"} err="failed to get container status \"af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca\": rpc error: code = NotFound desc = could not find container \"af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca\": container with ID starting with af04836a0da92676efb89264fdb29ed3ba979fe87615df337a13ba50276ca3ca not found: ID does not exist" Dec 04 07:09:10 crc kubenswrapper[4832]: I1204 07:09:10.711122 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:09:10 crc kubenswrapper[4832]: E1204 07:09:10.711916 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:09:10 crc kubenswrapper[4832]: I1204 07:09:10.724307 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" path="/var/lib/kubelet/pods/12a2318a-ae04-4bcd-a180-fa36f725d4a1/volumes" Dec 04 07:09:10 crc kubenswrapper[4832]: I1204 07:09:10.725064 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" path="/var/lib/kubelet/pods/f9785194-82e5-4e80-bcd9-eaeaa76b9bbc/volumes" Dec 04 07:09:21 crc kubenswrapper[4832]: I1204 07:09:21.275356 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zzf4r_d35e6baa-6315-48ee-904c-05da7d436283/control-plane-machine-set-operator/0.log" Dec 04 07:09:21 crc kubenswrapper[4832]: I1204 07:09:21.441698 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcvsv_8b214f93-e9ab-4500-9c6b-6319c5570459/kube-rbac-proxy/0.log" Dec 04 07:09:21 crc kubenswrapper[4832]: I1204 07:09:21.471897 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcvsv_8b214f93-e9ab-4500-9c6b-6319c5570459/machine-api-operator/0.log" Dec 04 07:09:23 crc kubenswrapper[4832]: I1204 07:09:23.711063 4832 scope.go:117] "RemoveContainer" 
containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:09:23 crc kubenswrapper[4832]: E1204 07:09:23.712494 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:09:34 crc kubenswrapper[4832]: I1204 07:09:34.502608 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7cv2p_982879a5-56a8-46a1-ac5f-73023f9a1ddc/cert-manager-controller/0.log" Dec 04 07:09:34 crc kubenswrapper[4832]: I1204 07:09:34.695610 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8smzg_ce0aa020-53b7-4687-b620-659e270dbcc3/cert-manager-cainjector/0.log" Dec 04 07:09:34 crc kubenswrapper[4832]: I1204 07:09:34.724669 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-jkfns_801084d1-2568-40d3-b9a1-3f3d43cecdea/cert-manager-webhook/0.log" Dec 04 07:09:36 crc kubenswrapper[4832]: I1204 07:09:36.711789 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:09:36 crc kubenswrapper[4832]: E1204 07:09:36.712618 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:09:46 crc kubenswrapper[4832]: I1204 07:09:46.778038 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-jfb7v_85a05826-c1ab-484b-b658-051dc78add17/nmstate-console-plugin/0.log" Dec 04 07:09:46 crc kubenswrapper[4832]: I1204 07:09:46.930348 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4qw9l_3d1046ad-79df-4e1c-8c25-6af2a0379417/nmstate-handler/0.log" Dec 04 07:09:47 crc kubenswrapper[4832]: I1204 07:09:47.007539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gcvj4_a6d2dc02-8689-4c6b-bde6-f9120db9f714/kube-rbac-proxy/0.log" Dec 04 07:09:47 crc kubenswrapper[4832]: I1204 07:09:47.054350 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gcvj4_a6d2dc02-8689-4c6b-bde6-f9120db9f714/nmstate-metrics/0.log" Dec 04 07:09:47 crc kubenswrapper[4832]: I1204 07:09:47.195467 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-rqknt_2a85a45c-df69-4030-af49-e7f2bb0b755e/nmstate-operator/0.log" Dec 04 07:09:47 crc kubenswrapper[4832]: I1204 07:09:47.320956 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-6ntwk_097d6138-4a11-4545-bb6e-a61ea6cff7fb/nmstate-webhook/0.log" Dec 04 07:09:47 crc kubenswrapper[4832]: I1204 07:09:47.711428 4832 scope.go:117] "RemoveContainer" 
containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:09:47 crc kubenswrapper[4832]: E1204 07:09:47.711770 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:09:58 crc kubenswrapper[4832]: I1204 07:09:58.710789 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:09:58 crc kubenswrapper[4832]: E1204 07:09:58.711739 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.243338 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-gclzl_a1b7280c-f3d1-4f5b-9f14-bf413e597077/kube-rbac-proxy/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.382543 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-gclzl_a1b7280c-f3d1-4f5b-9f14-bf413e597077/controller/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.449435 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.623833 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.655366 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.656298 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.708901 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.891484 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.892727 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.949638 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:10:02 crc kubenswrapper[4832]: I1204 07:10:02.969186 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.144814 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.146289 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.173141 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.181649 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/controller/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.419764 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/kube-rbac-proxy-frr/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.422478 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/kube-rbac-proxy/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.443110 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/frr-metrics/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.693964 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9h2b6_9e61f6af-2150-458f-9ace-ce824ac50448/frr-k8s-webhook-server/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.712499 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/reloader/0.log" Dec 04 07:10:03 crc kubenswrapper[4832]: I1204 07:10:03.910911 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5dff6547bc-rp4gv_59a7f669-83c5-454f-a192-94642ab2fe06/manager/0.log" Dec 04 07:10:04 crc kubenswrapper[4832]: I1204 07:10:04.196586 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f45496cc4-r8fz4_6c20405b-b33f-49ad-a10f-a9b32a3d320b/webhook-server/0.log" Dec 04 07:10:04 crc kubenswrapper[4832]: I1204 07:10:04.367696 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwbkx_3a0011d7-d649-42fa-bd27-b98eb4a958a3/kube-rbac-proxy/0.log" Dec 04 07:10:04 crc kubenswrapper[4832]: I1204 07:10:04.966622 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwbkx_3a0011d7-d649-42fa-bd27-b98eb4a958a3/speaker/0.log" Dec 04 07:10:05 crc kubenswrapper[4832]: I1204 07:10:05.170236 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/frr/0.log" Dec 04 07:10:11 crc kubenswrapper[4832]: I1204 07:10:11.711790 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:10:11 crc kubenswrapper[4832]: E1204 07:10:11.712857 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.220677 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/util/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.495993 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/util/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.522987 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/pull/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.530131 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/pull/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.689872 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/util/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.746198 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/extract/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.759350 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/pull/0.log" Dec 04 07:10:18 crc kubenswrapper[4832]: I1204 07:10:18.888757 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/util/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.057728 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/pull/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.096347 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/util/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.097602 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/pull/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.300995 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/util/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.321275 4832 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/extract/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.362926 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/pull/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.501717 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-utilities/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.689477 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-content/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.692864 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-utilities/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.742019 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-content/0.log" Dec 04 07:10:19 crc kubenswrapper[4832]: I1204 07:10:19.992576 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-utilities/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.002537 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-content/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.241495 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-utilities/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.475129 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-content/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.535712 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-utilities/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.536604 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/registry-server/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.648055 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-content/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.803014 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-content/0.log" Dec 04 07:10:20 crc kubenswrapper[4832]: I1204 07:10:20.811846 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-utilities/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 
07:10:21.235130 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q8xv8_d5e811d7-d4fd-4504-b6d0-8d653628465d/marketplace-operator/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.244240 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-utilities/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.426344 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-utilities/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.471152 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-content/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.476217 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-content/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.694677 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-utilities/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.736602 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-content/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.790257 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/registry-server/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.909205 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/registry-server/0.log" Dec 04 07:10:21 crc kubenswrapper[4832]: I1204 07:10:21.977800 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-utilities/0.log" Dec 04 07:10:22 crc kubenswrapper[4832]: I1204 07:10:22.185465 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-content/0.log" Dec 04 07:10:22 crc kubenswrapper[4832]: I1204 07:10:22.201014 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-content/0.log" Dec 04 07:10:22 crc kubenswrapper[4832]: I1204 07:10:22.228329 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-utilities/0.log" Dec 04 07:10:22 crc kubenswrapper[4832]: I1204 07:10:22.364732 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-utilities/0.log" Dec 04 07:10:22 crc kubenswrapper[4832]: I1204 07:10:22.387940 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-content/0.log" Dec 04 07:10:22 crc kubenswrapper[4832]: I1204 07:10:22.902955 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/registry-server/0.log" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.719477 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.720137 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.966308 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bh47q"] Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.968007 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="extract-utilities" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.968169 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="extract-utilities" Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.968319 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="registry-server" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.968479 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="registry-server" Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.968632 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="extract-utilities" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.968756 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="extract-utilities" Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.968890 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="extract-content" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.969000 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="extract-content" Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.969119 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="registry-server" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.969242 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="registry-server" Dec 04 07:10:24 crc kubenswrapper[4832]: E1204 07:10:24.969715 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="extract-content" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.969872 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="extract-content" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.971079 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="12a2318a-ae04-4bcd-a180-fa36f725d4a1" containerName="registry-server" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.971298 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9785194-82e5-4e80-bcd9-eaeaa76b9bbc" containerName="registry-server" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.974577 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:24 crc kubenswrapper[4832]: I1204 07:10:24.994409 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh47q"] Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.086020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-catalog-content\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.086123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdc2\" (UniqueName: \"kubernetes.io/projected/1fd26d49-3369-4fc8-b8fa-a2f879576266-kube-api-access-ngdc2\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.086176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-utilities\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.188453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdc2\" (UniqueName: \"kubernetes.io/projected/1fd26d49-3369-4fc8-b8fa-a2f879576266-kube-api-access-ngdc2\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.188535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-utilities\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.188659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-catalog-content\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.189331 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-catalog-content\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.189469 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-utilities\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.214968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdc2\" (UniqueName: \"kubernetes.io/projected/1fd26d49-3369-4fc8-b8fa-a2f879576266-kube-api-access-ngdc2\") pod \"certified-operators-bh47q\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.304122 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:25 crc kubenswrapper[4832]: I1204 07:10:25.705149 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh47q"] Dec 04 07:10:26 crc kubenswrapper[4832]: I1204 07:10:26.614614 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerID="dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09" exitCode=0 Dec 04 07:10:26 crc kubenswrapper[4832]: I1204 07:10:26.614720 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerDied","Data":"dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09"} Dec 04 07:10:26 crc kubenswrapper[4832]: I1204 07:10:26.614960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerStarted","Data":"786af406baea1fa17b0716db904f9ed32a94bf080dd68c5a9745d615908bcce1"} Dec 04 07:10:27 crc kubenswrapper[4832]: I1204 07:10:27.626347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerStarted","Data":"0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb"} Dec 04 07:10:28 crc kubenswrapper[4832]: I1204 07:10:28.643173 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerID="0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb" exitCode=0 Dec 04 07:10:28 crc kubenswrapper[4832]: I1204 07:10:28.643863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerDied","Data":"0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb"} Dec 04 07:10:29 crc kubenswrapper[4832]: I1204 07:10:29.655208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerStarted","Data":"37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd"} Dec 04 07:10:29 crc kubenswrapper[4832]: I1204 07:10:29.684444 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bh47q" podStartSLOduration=3.209156724 podStartE2EDuration="5.684417839s" podCreationTimestamp="2025-12-04 07:10:24 +0000 UTC" firstStartedPulling="2025-12-04 
07:10:26.617060301 +0000 UTC m=+3682.229878007" lastFinishedPulling="2025-12-04 07:10:29.092321416 +0000 UTC m=+3684.705139122" observedRunningTime="2025-12-04 07:10:29.675546682 +0000 UTC m=+3685.288364408" watchObservedRunningTime="2025-12-04 07:10:29.684417839 +0000 UTC m=+3685.297235545" Dec 04 07:10:35 crc kubenswrapper[4832]: I1204 07:10:35.304737 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:35 crc kubenswrapper[4832]: I1204 07:10:35.305334 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:35 crc kubenswrapper[4832]: I1204 07:10:35.354640 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:35 crc kubenswrapper[4832]: I1204 07:10:35.762342 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:36 crc kubenswrapper[4832]: I1204 07:10:36.139019 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh47q"] Dec 04 07:10:37 crc kubenswrapper[4832]: I1204 07:10:37.734084 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bh47q" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="registry-server" containerID="cri-o://37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd" gracePeriod=2 Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.321942 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.395734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdc2\" (UniqueName: \"kubernetes.io/projected/1fd26d49-3369-4fc8-b8fa-a2f879576266-kube-api-access-ngdc2\") pod \"1fd26d49-3369-4fc8-b8fa-a2f879576266\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.396126 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-utilities\") pod \"1fd26d49-3369-4fc8-b8fa-a2f879576266\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.396329 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-catalog-content\") pod \"1fd26d49-3369-4fc8-b8fa-a2f879576266\" (UID: \"1fd26d49-3369-4fc8-b8fa-a2f879576266\") " Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.399506 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-utilities" (OuterVolumeSpecName: "utilities") pod "1fd26d49-3369-4fc8-b8fa-a2f879576266" (UID: "1fd26d49-3369-4fc8-b8fa-a2f879576266"). InnerVolumeSpecName "utilities". 
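[Editor's aside] The pod_startup_latency_tracker record above for certified-operators-bh47q can be checked arithmetically against its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch under that assumption, using only the wall-clock values quoted in the record (the m=+... monotonic offsets are dropped):

package main

import (
	"fmt"
	"time"
)

// Recomputes the startup figures logged above for certified-operators-bh47q.
// Assumed relation (it matches the logged values exactly): SLO duration =
// end-to-end duration minus the time spent pulling images.
func main() {
	// Layout for Go's default time.Time formatting, which these fields use.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-04 07:10:24 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-12-04 07:10:26.617060301 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-12-04 07:10:29.092321416 +0000 UTC")  // lastFinishedPulling
	observed := parse("2025-12-04 07:10:29.684417839 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // 5.684417839s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.209156724s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}

Both printed durations match the record. The same relation explains why the must-gather and crc-debug pods later in this log report podStartSLOduration equal to podStartE2EDuration: their firstStartedPulling/lastFinishedPulling fields are the zero time 0001-01-01 00:00:00, i.e. no image pull contributed.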
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.419273 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd26d49-3369-4fc8-b8fa-a2f879576266-kube-api-access-ngdc2" (OuterVolumeSpecName: "kube-api-access-ngdc2") pod "1fd26d49-3369-4fc8-b8fa-a2f879576266" (UID: "1fd26d49-3369-4fc8-b8fa-a2f879576266"). InnerVolumeSpecName "kube-api-access-ngdc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.442997 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fd26d49-3369-4fc8-b8fa-a2f879576266" (UID: "1fd26d49-3369-4fc8-b8fa-a2f879576266"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.500375 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.500461 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdc2\" (UniqueName: \"kubernetes.io/projected/1fd26d49-3369-4fc8-b8fa-a2f879576266-kube-api-access-ngdc2\") on node \"crc\" DevicePath \"\"" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.500486 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fd26d49-3369-4fc8-b8fa-a2f879576266-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.765608 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerID="37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd" exitCode=0 Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.765657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerDied","Data":"37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd"} Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.765689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh47q" event={"ID":"1fd26d49-3369-4fc8-b8fa-a2f879576266","Type":"ContainerDied","Data":"786af406baea1fa17b0716db904f9ed32a94bf080dd68c5a9745d615908bcce1"} Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.765716 4832 scope.go:117] "RemoveContainer" containerID="37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.765881 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh47q" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.796747 4832 scope.go:117] "RemoveContainer" containerID="0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.823442 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh47q"] Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.829285 4832 scope.go:117] "RemoveContainer" containerID="dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.843346 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bh47q"] Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.876218 4832 scope.go:117] "RemoveContainer" containerID="37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd" Dec 04 07:10:38 crc kubenswrapper[4832]: E1204 07:10:38.877007 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd\": container with ID starting with 37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd not found: ID does not exist" containerID="37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.877062 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd"} err="failed to get container status \"37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd\": rpc error: code = NotFound desc = could not find container \"37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd\": container with ID starting with 37ba7efe17861c77ce91c9006f1b965c7d57c519cc0ff6299546022207eb3afd not found: ID does not exist" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.877102 4832 scope.go:117] "RemoveContainer" containerID="0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb" Dec 04 07:10:38 crc kubenswrapper[4832]: E1204 07:10:38.877669 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb\": container with ID starting with 0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb not found: ID does not exist" containerID="0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.877731 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb"} err="failed to get container status \"0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb\": rpc error: code = NotFound desc = could not find container \"0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb\": container with ID starting with 0cf26fb986fdc07445588266e9868a941a9d2cb31e998ad6acb13fbd1123c5bb not found: ID does not exist" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.877763 4832 scope.go:117] "RemoveContainer" containerID="dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09" Dec 04 07:10:38 crc kubenswrapper[4832]: E1204 07:10:38.878136 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09\": container with ID starting with dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09 not found: ID does not exist" containerID="dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09" Dec 04 07:10:38 crc kubenswrapper[4832]: I1204 07:10:38.878164 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09"} err="failed to get container status \"dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09\": rpc error: code = NotFound desc = could not find container \"dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09\": container with ID starting with dcaafd11fa13acb9e337036a523d1421cff3c8a7eda35fa679e75f58eea94f09 not found: ID does not exist" Dec 04 07:10:39 crc kubenswrapper[4832]: I1204 07:10:39.711163 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:10:39 crc kubenswrapper[4832]: E1204 07:10:39.711539 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:10:40 crc kubenswrapper[4832]: I1204 07:10:40.721918 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" path="/var/lib/kubelet/pods/1fd26d49-3369-4fc8-b8fa-a2f879576266/volumes" Dec 04 07:10:51 crc kubenswrapper[4832]: I1204 07:10:51.710882 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:10:51 crc kubenswrapper[4832]: E1204 07:10:51.712093 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:11:02 crc kubenswrapper[4832]: I1204 07:11:02.711057 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:11:02 crc kubenswrapper[4832]: E1204 07:11:02.711964 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:11:15 crc kubenswrapper[4832]: I1204 07:11:15.729175 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:11:15 crc kubenswrapper[4832]: E1204 07:11:15.730845 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:11:28 crc kubenswrapper[4832]: I1204 07:11:28.710746 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:11:28 crc kubenswrapper[4832]: E1204 07:11:28.711614 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:11:42 crc kubenswrapper[4832]: I1204 07:11:42.711270 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:11:42 crc kubenswrapper[4832]: E1204 07:11:42.712094 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:11:53 crc kubenswrapper[4832]: I1204 07:11:53.711338 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:11:53 crc kubenswrapper[4832]: E1204 07:11:53.714935 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:12:05 crc kubenswrapper[4832]: I1204 07:12:05.712124 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:12:05 crc kubenswrapper[4832]: E1204 07:12:05.713430 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:12:10 crc kubenswrapper[4832]: I1204 07:12:10.761229 4832 generic.go:334] "Generic (PLEG): container finished" podID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerID="2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4" exitCode=0 Dec 04 07:12:10 crc kubenswrapper[4832]: I1204 07:12:10.761340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qv5m8/must-gather-2hc97" 
event={"ID":"21e66927-9ab9-4f94-9843-02b60fb8041a","Type":"ContainerDied","Data":"2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4"} Dec 04 07:12:10 crc kubenswrapper[4832]: I1204 07:12:10.762902 4832 scope.go:117] "RemoveContainer" containerID="2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4" Dec 04 07:12:11 crc kubenswrapper[4832]: I1204 07:12:11.616288 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qv5m8_must-gather-2hc97_21e66927-9ab9-4f94-9843-02b60fb8041a/gather/0.log" Dec 04 07:12:17 crc kubenswrapper[4832]: I1204 07:12:17.711621 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:12:17 crc kubenswrapper[4832]: E1204 07:12:17.712800 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.313527 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qv5m8/must-gather-2hc97"] Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.314333 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qv5m8/must-gather-2hc97" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="copy" containerID="cri-o://cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265" gracePeriod=2 Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.325084 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qv5m8/must-gather-2hc97"] Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.772207 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qv5m8_must-gather-2hc97_21e66927-9ab9-4f94-9843-02b60fb8041a/copy/0.log" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.772711 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.865533 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qv5m8_must-gather-2hc97_21e66927-9ab9-4f94-9843-02b60fb8041a/copy/0.log" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.866673 4832 generic.go:334] "Generic (PLEG): container finished" podID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerID="cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265" exitCode=143 Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.866761 4832 scope.go:117] "RemoveContainer" containerID="cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.866804 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qv5m8/must-gather-2hc97" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.907392 4832 scope.go:117] "RemoveContainer" containerID="2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.954368 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21e66927-9ab9-4f94-9843-02b60fb8041a-must-gather-output\") pod \"21e66927-9ab9-4f94-9843-02b60fb8041a\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.954529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hv78\" (UniqueName: \"kubernetes.io/projected/21e66927-9ab9-4f94-9843-02b60fb8041a-kube-api-access-4hv78\") pod \"21e66927-9ab9-4f94-9843-02b60fb8041a\" (UID: \"21e66927-9ab9-4f94-9843-02b60fb8041a\") " Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.962810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e66927-9ab9-4f94-9843-02b60fb8041a-kube-api-access-4hv78" (OuterVolumeSpecName: "kube-api-access-4hv78") pod "21e66927-9ab9-4f94-9843-02b60fb8041a" (UID: "21e66927-9ab9-4f94-9843-02b60fb8041a"). InnerVolumeSpecName "kube-api-access-4hv78". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.979378 4832 scope.go:117] "RemoveContainer" containerID="cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265" Dec 04 07:12:19 crc kubenswrapper[4832]: E1204 07:12:19.980111 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265\": container with ID starting with cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265 not found: ID does not exist" containerID="cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.980169 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265"} err="failed to get container status \"cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265\": rpc error: code = NotFound desc = could not find container \"cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265\": container with ID starting with cd25f17930265752279e18f55645ad0911cd362eaf877a276cae768dcb467265 not found: ID does not exist" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.980198 4832 scope.go:117] "RemoveContainer" containerID="2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4" Dec 04 07:12:19 crc kubenswrapper[4832]: E1204 07:12:19.980682 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4\": container with ID starting with 2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4 not found: ID does not exist" containerID="2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4" Dec 04 07:12:19 crc kubenswrapper[4832]: I1204 07:12:19.980718 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4"} err="failed to get container status \"2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4\": rpc error: code = NotFound desc = could not find container \"2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4\": container with ID starting with 2115c592255305009db4729dd74e4d03289c1064907a97b8cde41583f37bc1e4 not found: ID does not exist" Dec 04 07:12:20 crc kubenswrapper[4832]: I1204 07:12:20.060120 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hv78\" (UniqueName: \"kubernetes.io/projected/21e66927-9ab9-4f94-9843-02b60fb8041a-kube-api-access-4hv78\") on node \"crc\" DevicePath \"\"" Dec 04 07:12:20 crc kubenswrapper[4832]: I1204 07:12:20.123904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21e66927-9ab9-4f94-9843-02b60fb8041a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "21e66927-9ab9-4f94-9843-02b60fb8041a" (UID: "21e66927-9ab9-4f94-9843-02b60fb8041a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:12:20 crc kubenswrapper[4832]: I1204 07:12:20.162948 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21e66927-9ab9-4f94-9843-02b60fb8041a-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 04 07:12:20 crc kubenswrapper[4832]: I1204 07:12:20.723006 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" path="/var/lib/kubelet/pods/21e66927-9ab9-4f94-9843-02b60fb8041a/volumes" Dec 04 07:12:29 crc kubenswrapper[4832]: I1204 07:12:29.711244 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:12:29 crc kubenswrapper[4832]: E1204 07:12:29.712651 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:12:42 crc kubenswrapper[4832]: I1204 07:12:42.710909 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:12:42 crc kubenswrapper[4832]: E1204 07:12:42.712362 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:12:56 crc kubenswrapper[4832]: I1204 07:12:56.711091 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:12:56 crc kubenswrapper[4832]: E1204 07:12:56.712046 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" Dec 04 07:13:10 crc kubenswrapper[4832]: I1204 07:13:10.711190 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:13:11 crc kubenswrapper[4832]: I1204 07:13:11.441871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"3fa00dd8f0c116624370f95fff5190bd09266270dbd554276712c48bf66ab985"} Dec 04 07:13:22 crc kubenswrapper[4832]: I1204 07:13:22.325477 4832 scope.go:117] "RemoveContainer" containerID="352d55af81fdb00b6dd6decc96e5ff9c842130939a99b19149206995147d2a24" Dec 04 07:14:22 crc kubenswrapper[4832]: I1204 07:14:22.395509 4832 scope.go:117] "RemoveContainer" containerID="454f025f23db825be994be82f61c782a9d7b42ff41bf3fe2df776b4d953cfcae" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.207636 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd"] Dec 04 07:15:00 crc kubenswrapper[4832]: E1204 07:15:00.208735 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="gather" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.208751 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="gather" Dec 04 07:15:00 crc kubenswrapper[4832]: E1204 07:15:00.208769 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="extract-utilities" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.208776 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="extract-utilities" Dec 04 07:15:00 crc kubenswrapper[4832]: E1204 07:15:00.208809 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="extract-content" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.208821 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="extract-content" Dec 04 07:15:00 crc kubenswrapper[4832]: E1204 07:15:00.208838 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="copy" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.208845 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="copy" Dec 04 07:15:00 crc kubenswrapper[4832]: E1204 07:15:00.208882 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="registry-server" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.208891 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="registry-server" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.209141 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd26d49-3369-4fc8-b8fa-a2f879576266" containerName="registry-server" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.209160 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="gather" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.209170 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e66927-9ab9-4f94-9843-02b60fb8041a" containerName="copy" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.210032 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.217861 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.218236 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.224977 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd"] Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.323339 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbwk\" (UniqueName: \"kubernetes.io/projected/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-kube-api-access-bfbwk\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.323444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-secret-volume\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.323484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-config-volume\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.425611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbwk\" (UniqueName: \"kubernetes.io/projected/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-kube-api-access-bfbwk\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.425750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-secret-volume\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.425798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-config-volume\") pod 
\"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.427187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-config-volume\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.437583 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-secret-volume\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.444452 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbwk\" (UniqueName: \"kubernetes.io/projected/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-kube-api-access-bfbwk\") pod \"collect-profiles-29413875-l7zvd\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:00 crc kubenswrapper[4832]: I1204 07:15:00.550685 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:01 crc kubenswrapper[4832]: I1204 07:15:01.008345 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd"] Dec 04 07:15:01 crc kubenswrapper[4832]: I1204 07:15:01.612583 4832 generic.go:334] "Generic (PLEG): container finished" podID="98eaafa7-4d40-4f7d-bc8c-a95f543bd387" containerID="75fc76569bde4c6fdc3754ee4c297c1918beb9e55ff56e8608cacf7375c73217" exitCode=0 Dec 04 07:15:01 crc kubenswrapper[4832]: I1204 07:15:01.612834 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" event={"ID":"98eaafa7-4d40-4f7d-bc8c-a95f543bd387","Type":"ContainerDied","Data":"75fc76569bde4c6fdc3754ee4c297c1918beb9e55ff56e8608cacf7375c73217"} Dec 04 07:15:01 crc kubenswrapper[4832]: I1204 07:15:01.614682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" event={"ID":"98eaafa7-4d40-4f7d-bc8c-a95f543bd387","Type":"ContainerStarted","Data":"8c9d4b47146afce047290c62aebe71cb55f4cff978b1baf7d2f20ed41f41bd26"} Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.642442 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkrx7/must-gather-6ctsd"] Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.644697 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.647134 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lkrx7"/"kube-root-ca.crt" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.647281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lkrx7"/"default-dockercfg-xk2n9" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.647207 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lkrx7"/"openshift-service-ca.crt" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.659046 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lkrx7/must-gather-6ctsd"] Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.776066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-must-gather-output\") pod \"must-gather-6ctsd\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") " pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.776630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz77c\" (UniqueName: \"kubernetes.io/projected/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-kube-api-access-gz77c\") pod \"must-gather-6ctsd\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") " pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.878512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz77c\" (UniqueName: \"kubernetes.io/projected/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-kube-api-access-gz77c\") pod \"must-gather-6ctsd\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") " pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.878819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-must-gather-output\") pod \"must-gather-6ctsd\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") " pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.879524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-must-gather-output\") pod \"must-gather-6ctsd\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") " pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.902316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz77c\" (UniqueName: \"kubernetes.io/projected/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-kube-api-access-gz77c\") pod \"must-gather-6ctsd\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") " pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:02 crc kubenswrapper[4832]: I1204 07:15:02.985198 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.106963 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.287077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfbwk\" (UniqueName: \"kubernetes.io/projected/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-kube-api-access-bfbwk\") pod \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.287359 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-config-volume\") pod \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.287417 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-secret-volume\") pod \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\" (UID: \"98eaafa7-4d40-4f7d-bc8c-a95f543bd387\") " Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.287815 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-config-volume" (OuterVolumeSpecName: "config-volume") pod "98eaafa7-4d40-4f7d-bc8c-a95f543bd387" (UID: "98eaafa7-4d40-4f7d-bc8c-a95f543bd387"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.288270 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.293115 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-kube-api-access-bfbwk" (OuterVolumeSpecName: "kube-api-access-bfbwk") pod "98eaafa7-4d40-4f7d-bc8c-a95f543bd387" (UID: "98eaafa7-4d40-4f7d-bc8c-a95f543bd387"). InnerVolumeSpecName "kube-api-access-bfbwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.293941 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98eaafa7-4d40-4f7d-bc8c-a95f543bd387" (UID: "98eaafa7-4d40-4f7d-bc8c-a95f543bd387"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.390301 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.390336 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfbwk\" (UniqueName: \"kubernetes.io/projected/98eaafa7-4d40-4f7d-bc8c-a95f543bd387-kube-api-access-bfbwk\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.469253 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lkrx7/must-gather-6ctsd"] Dec 04 07:15:03 crc kubenswrapper[4832]: W1204 07:15:03.478735 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf8cdd4_dc37_4fdf_97c7_ab5779457b84.slice/crio-d1b38d316a919740767f9ce5053b333912bce37b70aad08dd4b31ca5d310bf09 WatchSource:0}: Error finding container d1b38d316a919740767f9ce5053b333912bce37b70aad08dd4b31ca5d310bf09: Status 404 returned error can't find the container with id d1b38d316a919740767f9ce5053b333912bce37b70aad08dd4b31ca5d310bf09 Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.639560 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" event={"ID":"98eaafa7-4d40-4f7d-bc8c-a95f543bd387","Type":"ContainerDied","Data":"8c9d4b47146afce047290c62aebe71cb55f4cff978b1baf7d2f20ed41f41bd26"} Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.639958 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9d4b47146afce047290c62aebe71cb55f4cff978b1baf7d2f20ed41f41bd26" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.639852 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413875-l7zvd" Dec 04 07:15:03 crc kubenswrapper[4832]: I1204 07:15:03.641138 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" event={"ID":"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84","Type":"ContainerStarted","Data":"d1b38d316a919740767f9ce5053b333912bce37b70aad08dd4b31ca5d310bf09"} Dec 04 07:15:04 crc kubenswrapper[4832]: I1204 07:15:04.183429 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9"] Dec 04 07:15:04 crc kubenswrapper[4832]: I1204 07:15:04.193750 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413830-b4vv9"] Dec 04 07:15:04 crc kubenswrapper[4832]: I1204 07:15:04.708078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" event={"ID":"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84","Type":"ContainerStarted","Data":"8e055f19ba610d0ee04d93694e423843276e6971fc105c371f0c69807f78852e"} Dec 04 07:15:04 crc kubenswrapper[4832]: I1204 07:15:04.708156 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" event={"ID":"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84","Type":"ContainerStarted","Data":"72f9025f6984752433daf4e879bc3fadd9e9eedb37ca1bc978214ec5b53d19b4"} Dec 04 07:15:04 crc kubenswrapper[4832]: I1204 07:15:04.793401 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" podStartSLOduration=2.7933546590000002 podStartE2EDuration="2.793354659s" podCreationTimestamp="2025-12-04 07:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 07:15:04.744433643 +0000 UTC m=+3960.357251359" watchObservedRunningTime="2025-12-04 07:15:04.793354659 +0000 UTC m=+3960.406172355" Dec 04 07:15:04 crc kubenswrapper[4832]: I1204 07:15:04.853780 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd89528b-85d2-4125-acf4-1a101323819a" path="/var/lib/kubelet/pods/cd89528b-85d2-4125-acf4-1a101323819a/volumes" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.470964 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-d89r2"] Dec 04 07:15:07 crc kubenswrapper[4832]: E1204 07:15:07.472035 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eaafa7-4d40-4f7d-bc8c-a95f543bd387" containerName="collect-profiles" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.472050 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eaafa7-4d40-4f7d-bc8c-a95f543bd387" containerName="collect-profiles" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.472261 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eaafa7-4d40-4f7d-bc8c-a95f543bd387" containerName="collect-profiles" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.473056 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.621717 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-host\") pod \"crc-debug-d89r2\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.621920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5842n\" (UniqueName: \"kubernetes.io/projected/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-kube-api-access-5842n\") pod \"crc-debug-d89r2\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.723685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-host\") pod \"crc-debug-d89r2\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.723870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5842n\" (UniqueName: \"kubernetes.io/projected/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-kube-api-access-5842n\") pod \"crc-debug-d89r2\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.723886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-host\") pod \"crc-debug-d89r2\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.754326 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5842n\" (UniqueName: \"kubernetes.io/projected/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-kube-api-access-5842n\") pod \"crc-debug-d89r2\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:07 crc kubenswrapper[4832]: I1204 07:15:07.806504 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:08 crc kubenswrapper[4832]: I1204 07:15:08.746437 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" event={"ID":"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb","Type":"ContainerStarted","Data":"c51d5525d4765ddb08828b39aaf1961a1af91f61796ee536727ca53cdca5b7c5"} Dec 04 07:15:08 crc kubenswrapper[4832]: I1204 07:15:08.747330 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" event={"ID":"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb","Type":"ContainerStarted","Data":"edaeab51efcaf715ca25cdd4664b722d76a5f0a810af7d1880ab94dfcbef3858"} Dec 04 07:15:08 crc kubenswrapper[4832]: I1204 07:15:08.775857 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" podStartSLOduration=1.775829648 podStartE2EDuration="1.775829648s" podCreationTimestamp="2025-12-04 07:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 07:15:08.76158341 +0000 UTC m=+3964.374401116" watchObservedRunningTime="2025-12-04 07:15:08.775829648 +0000 UTC m=+3964.388647354" Dec 04 07:15:22 crc kubenswrapper[4832]: I1204 07:15:22.461688 4832 scope.go:117] "RemoveContainer" containerID="174ec3940d55b507c8dd9634917ee301400b3bec6bb5ada7456cee9dab9dbfec" Dec 04 07:15:35 crc kubenswrapper[4832]: I1204 07:15:35.366347 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:15:35 crc kubenswrapper[4832]: I1204 07:15:35.366923 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:15:42 crc kubenswrapper[4832]: I1204 07:15:42.082932 4832 generic.go:334] "Generic (PLEG): container finished" podID="3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" containerID="c51d5525d4765ddb08828b39aaf1961a1af91f61796ee536727ca53cdca5b7c5" exitCode=0 Dec 04 07:15:42 crc kubenswrapper[4832]: I1204 07:15:42.083544 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" event={"ID":"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb","Type":"ContainerDied","Data":"c51d5525d4765ddb08828b39aaf1961a1af91f61796ee536727ca53cdca5b7c5"} Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.202970 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.243410 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-d89r2"] Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.254446 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-d89r2"] Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.337595 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5842n\" (UniqueName: \"kubernetes.io/projected/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-kube-api-access-5842n\") pod \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.337741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-host\") pod \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\" (UID: \"3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb\") " Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.338246 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-host" (OuterVolumeSpecName: "host") pod "3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" (UID: "3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.346446 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-kube-api-access-5842n" (OuterVolumeSpecName: "kube-api-access-5842n") pod "3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" (UID: "3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb"). InnerVolumeSpecName "kube-api-access-5842n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.441687 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5842n\" (UniqueName: \"kubernetes.io/projected/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-kube-api-access-5842n\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:43 crc kubenswrapper[4832]: I1204 07:15:43.441745 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb-host\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.104531 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edaeab51efcaf715ca25cdd4664b722d76a5f0a810af7d1880ab94dfcbef3858" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.104617 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-d89r2" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.464102 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-xwb6k"] Dec 04 07:15:44 crc kubenswrapper[4832]: E1204 07:15:44.464561 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" containerName="container-00" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.464577 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" containerName="container-00" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.464820 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" containerName="container-00" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.466200 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.563848 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccp6\" (UniqueName: \"kubernetes.io/projected/70cebede-ee73-4fb8-b694-dee38af04277-kube-api-access-2ccp6\") pod \"crc-debug-xwb6k\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.563954 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70cebede-ee73-4fb8-b694-dee38af04277-host\") pod \"crc-debug-xwb6k\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.665825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccp6\" (UniqueName: \"kubernetes.io/projected/70cebede-ee73-4fb8-b694-dee38af04277-kube-api-access-2ccp6\") pod \"crc-debug-xwb6k\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.665990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70cebede-ee73-4fb8-b694-dee38af04277-host\") pod \"crc-debug-xwb6k\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.666192 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70cebede-ee73-4fb8-b694-dee38af04277-host\") pod \"crc-debug-xwb6k\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.692101 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccp6\" (UniqueName: \"kubernetes.io/projected/70cebede-ee73-4fb8-b694-dee38af04277-kube-api-access-2ccp6\") pod \"crc-debug-xwb6k\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.723216 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb" 
path="/var/lib/kubelet/pods/3272711a-a58a-4bd3-aee8-b1f7b1fdd0eb/volumes" Dec 04 07:15:44 crc kubenswrapper[4832]: I1204 07:15:44.787674 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:45 crc kubenswrapper[4832]: I1204 07:15:45.123826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" event={"ID":"70cebede-ee73-4fb8-b694-dee38af04277","Type":"ContainerStarted","Data":"280e01b3bfffe1f5d3a9782d0ab3510dda74775c4030bcba7ccaf6bed258b440"} Dec 04 07:15:45 crc kubenswrapper[4832]: I1204 07:15:45.124171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" event={"ID":"70cebede-ee73-4fb8-b694-dee38af04277","Type":"ContainerStarted","Data":"9a62cf78376165440dee7df9b951444b3fad0c5161ca16fbfda0443cf02ad5e2"} Dec 04 07:15:45 crc kubenswrapper[4832]: I1204 07:15:45.144639 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" podStartSLOduration=1.144609245 podStartE2EDuration="1.144609245s" podCreationTimestamp="2025-12-04 07:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 07:15:45.140578546 +0000 UTC m=+4000.753396342" watchObservedRunningTime="2025-12-04 07:15:45.144609245 +0000 UTC m=+4000.757426951" Dec 04 07:15:46 crc kubenswrapper[4832]: I1204 07:15:46.135857 4832 generic.go:334] "Generic (PLEG): container finished" podID="70cebede-ee73-4fb8-b694-dee38af04277" containerID="280e01b3bfffe1f5d3a9782d0ab3510dda74775c4030bcba7ccaf6bed258b440" exitCode=0 Dec 04 07:15:46 crc kubenswrapper[4832]: I1204 07:15:46.135944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" event={"ID":"70cebede-ee73-4fb8-b694-dee38af04277","Type":"ContainerDied","Data":"280e01b3bfffe1f5d3a9782d0ab3510dda74775c4030bcba7ccaf6bed258b440"} Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.278105 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.330301 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-xwb6k"] Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.334459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70cebede-ee73-4fb8-b694-dee38af04277-host\") pod \"70cebede-ee73-4fb8-b694-dee38af04277\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.334576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70cebede-ee73-4fb8-b694-dee38af04277-host" (OuterVolumeSpecName: "host") pod "70cebede-ee73-4fb8-b694-dee38af04277" (UID: "70cebede-ee73-4fb8-b694-dee38af04277"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.334721 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ccp6\" (UniqueName: \"kubernetes.io/projected/70cebede-ee73-4fb8-b694-dee38af04277-kube-api-access-2ccp6\") pod \"70cebede-ee73-4fb8-b694-dee38af04277\" (UID: \"70cebede-ee73-4fb8-b694-dee38af04277\") " Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.335171 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70cebede-ee73-4fb8-b694-dee38af04277-host\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.343448 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70cebede-ee73-4fb8-b694-dee38af04277-kube-api-access-2ccp6" (OuterVolumeSpecName: "kube-api-access-2ccp6") pod "70cebede-ee73-4fb8-b694-dee38af04277" (UID: "70cebede-ee73-4fb8-b694-dee38af04277"). InnerVolumeSpecName "kube-api-access-2ccp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.345336 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-xwb6k"] Dec 04 07:15:47 crc kubenswrapper[4832]: I1204 07:15:47.435995 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ccp6\" (UniqueName: \"kubernetes.io/projected/70cebede-ee73-4fb8-b694-dee38af04277-kube-api-access-2ccp6\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.160213 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a62cf78376165440dee7df9b951444b3fad0c5161ca16fbfda0443cf02ad5e2" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.160296 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-xwb6k" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.518924 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-95c6h"] Dec 04 07:15:48 crc kubenswrapper[4832]: E1204 07:15:48.519343 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70cebede-ee73-4fb8-b694-dee38af04277" containerName="container-00" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.519356 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="70cebede-ee73-4fb8-b694-dee38af04277" containerName="container-00" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.519570 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="70cebede-ee73-4fb8-b694-dee38af04277" containerName="container-00" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.520229 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.660604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/680b50b2-86e3-43a1-968d-1f897c625ab1-host\") pod \"crc-debug-95c6h\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.661079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dnfb\" (UniqueName: \"kubernetes.io/projected/680b50b2-86e3-43a1-968d-1f897c625ab1-kube-api-access-6dnfb\") pod \"crc-debug-95c6h\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.730297 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70cebede-ee73-4fb8-b694-dee38af04277" path="/var/lib/kubelet/pods/70cebede-ee73-4fb8-b694-dee38af04277/volumes" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.764638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/680b50b2-86e3-43a1-968d-1f897c625ab1-host\") pod \"crc-debug-95c6h\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.764765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dnfb\" (UniqueName: \"kubernetes.io/projected/680b50b2-86e3-43a1-968d-1f897c625ab1-kube-api-access-6dnfb\") pod \"crc-debug-95c6h\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.764952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/680b50b2-86e3-43a1-968d-1f897c625ab1-host\") pod \"crc-debug-95c6h\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.802194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dnfb\" (UniqueName: \"kubernetes.io/projected/680b50b2-86e3-43a1-968d-1f897c625ab1-kube-api-access-6dnfb\") pod \"crc-debug-95c6h\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: I1204 07:15:48.846181 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:48 crc kubenswrapper[4832]: W1204 07:15:48.904832 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod680b50b2_86e3_43a1_968d_1f897c625ab1.slice/crio-5b3de88b9d371024e0ce91c39690514ac92cc86cc56409a5700efcbf676843d4 WatchSource:0}: Error finding container 5b3de88b9d371024e0ce91c39690514ac92cc86cc56409a5700efcbf676843d4: Status 404 returned error can't find the container with id 5b3de88b9d371024e0ce91c39690514ac92cc86cc56409a5700efcbf676843d4 Dec 04 07:15:49 crc kubenswrapper[4832]: I1204 07:15:49.182293 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-95c6h" event={"ID":"680b50b2-86e3-43a1-968d-1f897c625ab1","Type":"ContainerStarted","Data":"5b3de88b9d371024e0ce91c39690514ac92cc86cc56409a5700efcbf676843d4"} Dec 04 07:15:50 crc kubenswrapper[4832]: I1204 07:15:50.206659 4832 generic.go:334] "Generic (PLEG): container finished" podID="680b50b2-86e3-43a1-968d-1f897c625ab1" containerID="c69a912ba2159554bf4d631b2fc8f78db9bcb89add053727f2b6a7a2f9baf0c8" exitCode=0 Dec 04 07:15:50 crc kubenswrapper[4832]: I1204 07:15:50.207716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/crc-debug-95c6h" event={"ID":"680b50b2-86e3-43a1-968d-1f897c625ab1","Type":"ContainerDied","Data":"c69a912ba2159554bf4d631b2fc8f78db9bcb89add053727f2b6a7a2f9baf0c8"} Dec 04 07:15:50 crc kubenswrapper[4832]: I1204 07:15:50.275114 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-95c6h"] Dec 04 07:15:50 crc kubenswrapper[4832]: I1204 07:15:50.286707 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkrx7/crc-debug-95c6h"] Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.318912 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.332075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dnfb\" (UniqueName: \"kubernetes.io/projected/680b50b2-86e3-43a1-968d-1f897c625ab1-kube-api-access-6dnfb\") pod \"680b50b2-86e3-43a1-968d-1f897c625ab1\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.332137 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/680b50b2-86e3-43a1-968d-1f897c625ab1-host\") pod \"680b50b2-86e3-43a1-968d-1f897c625ab1\" (UID: \"680b50b2-86e3-43a1-968d-1f897c625ab1\") " Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.332239 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/680b50b2-86e3-43a1-968d-1f897c625ab1-host" (OuterVolumeSpecName: "host") pod "680b50b2-86e3-43a1-968d-1f897c625ab1" (UID: "680b50b2-86e3-43a1-968d-1f897c625ab1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.333176 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/680b50b2-86e3-43a1-968d-1f897c625ab1-host\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.338692 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680b50b2-86e3-43a1-968d-1f897c625ab1-kube-api-access-6dnfb" (OuterVolumeSpecName: "kube-api-access-6dnfb") pod "680b50b2-86e3-43a1-968d-1f897c625ab1" (UID: "680b50b2-86e3-43a1-968d-1f897c625ab1"). InnerVolumeSpecName "kube-api-access-6dnfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:15:51 crc kubenswrapper[4832]: I1204 07:15:51.434172 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dnfb\" (UniqueName: \"kubernetes.io/projected/680b50b2-86e3-43a1-968d-1f897c625ab1-kube-api-access-6dnfb\") on node \"crc\" DevicePath \"\"" Dec 04 07:15:52 crc kubenswrapper[4832]: I1204 07:15:52.227641 4832 scope.go:117] "RemoveContainer" containerID="c69a912ba2159554bf4d631b2fc8f78db9bcb89add053727f2b6a7a2f9baf0c8" Dec 04 07:15:52 crc kubenswrapper[4832]: I1204 07:15:52.227797 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/crc-debug-95c6h" Dec 04 07:15:52 crc kubenswrapper[4832]: I1204 07:15:52.724937 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680b50b2-86e3-43a1-968d-1f897c625ab1" path="/var/lib/kubelet/pods/680b50b2-86e3-43a1-968d-1f897c625ab1/volumes" Dec 04 07:16:05 crc kubenswrapper[4832]: I1204 07:16:05.362840 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:16:05 crc kubenswrapper[4832]: I1204 07:16:05.363611 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:16:16 crc kubenswrapper[4832]: I1204 07:16:16.845896 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5869d975cd-47z8d_26ad016c-9e7b-49bd-9031-830f8319a79d/barbican-api/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.031015 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5869d975cd-47z8d_26ad016c-9e7b-49bd-9031-830f8319a79d/barbican-api-log/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.045300 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6955d5c798-vn8dg_94c9f353-9085-4009-b151-3d5f9418148e/barbican-keystone-listener/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.158499 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6955d5c798-vn8dg_94c9f353-9085-4009-b151-3d5f9418148e/barbican-keystone-listener-log/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.264490 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-76cc4f7d9f-s989p_4b753665-a7f4-4c62-b4ee-a0842bbbe487/barbican-worker/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.364297 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76cc4f7d9f-s989p_4b753665-a7f4-4c62-b4ee-a0842bbbe487/barbican-worker-log/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.513000 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dvg5m_45d69295-db9b-4a70-a031-2e19abcf6be1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.649956 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/ceilometer-central-agent/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.717709 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/ceilometer-notification-agent/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.719201 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/proxy-httpd/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.811134 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2201e018-55df-4295-b234-ae553e00f058/sg-core/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.937182 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b68da07-a347-452b-85c0-ba171d852d15/cinder-api-log/0.log" Dec 04 07:16:17 crc kubenswrapper[4832]: I1204 07:16:17.995112 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9b68da07-a347-452b-85c0-ba171d852d15/cinder-api/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.202607 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3fc4e266-71ba-403f-a4a3-6c7dc12995b7/cinder-scheduler/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.229877 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3fc4e266-71ba-403f-a4a3-6c7dc12995b7/probe/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.408342 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jcwnx_4fdaa066-59c4-4491-961c-d72bb1a75243/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.462287 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9p6vm_452e59ff-3e14-4082-b812-ff4d5d671b27/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.614178 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-wpmzl_fc3ad9fb-9341-4b1f-8b27-ee71d9f37309/init/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.840057 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-wpmzl_fc3ad9fb-9341-4b1f-8b27-ee71d9f37309/init/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.894944 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-wpmzl_fc3ad9fb-9341-4b1f-8b27-ee71d9f37309/dnsmasq-dns/0.log" Dec 04 07:16:18 crc kubenswrapper[4832]: I1204 07:16:18.912998 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-t96k5_a88a60b4-19c2-4ef9-b586-2b6733219e7a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.123637 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_19e43c1f-dcda-45c3-84aa-fe00d987d334/glance-httpd/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.156617 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_19e43c1f-dcda-45c3-84aa-fe00d987d334/glance-log/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.334806 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_44deec12-659d-4dcf-a08b-252f6a004f0b/glance-log/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.342845 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_44deec12-659d-4dcf-a08b-252f6a004f0b/glance-httpd/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.546974 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-847bcdcbb8-ph9ks_a75235c9-c000-495b-92d7-797733f10601/horizon/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.655960 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vlsjj_fa905232-11b8-4af4-96a8-7a7ef46bf17d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:19 crc kubenswrapper[4832]: I1204 07:16:19.859863 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nhghp_069bfa79-a14b-4545-a791-be3f21ed774f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.010306 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-847bcdcbb8-ph9ks_a75235c9-c000-495b-92d7-797733f10601/horizon-log/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.228255 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5fb459446f-clqb5_dceba324-ac23-407c-ac0d-7ca4abce124d/keystone-api/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.234858 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29413861-44mgd_38e7bb37-9dd3-4010-89f6-e49c6d710eab/keystone-cron/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.410952 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a8f66227-3513-4327-81ec-2f1f147294e8/kube-state-metrics/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.481144 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-d2cp2_9e2d5cc2-a6c8-4953-8c34-650c047c5848/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.872894 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdbc57ff5-2cpdh_0864aed7-87aa-47d4-b38e-17d8863bb83e/neutron-httpd/0.log" Dec 04 07:16:20 crc kubenswrapper[4832]: I1204 07:16:20.944018 4832 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_neutron-5bdbc57ff5-2cpdh_0864aed7-87aa-47d4-b38e-17d8863bb83e/neutron-api/0.log" Dec 04 07:16:21 crc kubenswrapper[4832]: I1204 07:16:21.046925 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rpsx_27b9693c-3bb0-4819-bbcf-87634b6bb8e3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:21 crc kubenswrapper[4832]: I1204 07:16:21.614455 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4/nova-api-log/0.log" Dec 04 07:16:21 crc kubenswrapper[4832]: I1204 07:16:21.713976 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_83d2f2b1-c068-4912-9c17-adb96b1d9233/nova-cell0-conductor-conductor/0.log" Dec 04 07:16:22 crc kubenswrapper[4832]: I1204 07:16:22.046367 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ba7c37ae-34b2-4bf6-8d71-13a61f8b5da4/nova-api-api/0.log" Dec 04 07:16:22 crc kubenswrapper[4832]: I1204 07:16:22.056908 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d02a8c92-b3b4-4c91-8d11-e0937e4928e0/nova-cell1-conductor-conductor/0.log" Dec 04 07:16:22 crc kubenswrapper[4832]: I1204 07:16:22.121153 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1c696d5d-0ba4-4406-aa1e-ef9d99df8136/nova-cell1-novncproxy-novncproxy/0.log" Dec 04 07:16:22 crc kubenswrapper[4832]: I1204 07:16:22.817267 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gklbz_dcd764c1-4caa-4556-a755-3237f104b88e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:22 crc kubenswrapper[4832]: I1204 07:16:22.961174 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f30e5fb-d9e0-4048-9e6e-3559465be9d4/nova-metadata-log/0.log" Dec 04 07:16:23 crc kubenswrapper[4832]: I1204 07:16:23.309943 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9841a1c2-83f5-475b-8180-b1e9cd13467b/mysql-bootstrap/0.log" Dec 04 07:16:23 crc kubenswrapper[4832]: I1204 07:16:23.448503 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4e159726-cef9-46df-b183-6b0b2f5b013e/nova-scheduler-scheduler/0.log" Dec 04 07:16:23 crc kubenswrapper[4832]: I1204 07:16:23.551420 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9841a1c2-83f5-475b-8180-b1e9cd13467b/mysql-bootstrap/0.log" Dec 04 07:16:23 crc kubenswrapper[4832]: I1204 07:16:23.561484 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9841a1c2-83f5-475b-8180-b1e9cd13467b/galera/0.log" Dec 04 07:16:23 crc kubenswrapper[4832]: I1204 07:16:23.794787 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_22fcd5ed-0004-4329-b8c6-7855939765dc/mysql-bootstrap/0.log" Dec 04 07:16:24 crc kubenswrapper[4832]: I1204 07:16:24.019449 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_22fcd5ed-0004-4329-b8c6-7855939765dc/mysql-bootstrap/0.log" Dec 04 07:16:24 crc kubenswrapper[4832]: I1204 07:16:24.029954 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_22fcd5ed-0004-4329-b8c6-7855939765dc/galera/0.log" Dec 04 07:16:24 crc 
kubenswrapper[4832]: I1204 07:16:24.265021 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4936447c-abfd-4bad-b720-db17f1bca70c/openstackclient/0.log" Dec 04 07:16:24 crc kubenswrapper[4832]: I1204 07:16:24.385535 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kcxl8_6de6fb2f-c87b-41af-8e93-05d7da0fad2a/ovn-controller/0.log" Dec 04 07:16:24 crc kubenswrapper[4832]: I1204 07:16:24.518845 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f30e5fb-d9e0-4048-9e6e-3559465be9d4/nova-metadata-metadata/0.log" Dec 04 07:16:24 crc kubenswrapper[4832]: I1204 07:16:24.531601 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xv2xq_bc55e7c7-bd80-4440-922d-d0711af4b912/openstack-network-exporter/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.056555 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovsdb-server-init/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.315594 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovsdb-server-init/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.325176 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovsdb-server/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.336507 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m96v7_f8ddd7e9-d452-4a6b-8de1-9aaabbcc98bf/ovs-vswitchd/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.582946 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9j4xg_7cf92a16-6e70-4b63-a14e-30a5b041a80f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.606701 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9b61be98-5007-43c6-b717-dae011be5830/openstack-network-exporter/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.745322 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9b61be98-5007-43c6-b717-dae011be5830/ovn-northd/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.809409 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f89488d-b176-4bf6-9172-ed2fc6492019/openstack-network-exporter/0.log" Dec 04 07:16:25 crc kubenswrapper[4832]: I1204 07:16:25.820052 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8f89488d-b176-4bf6-9172-ed2fc6492019/ovsdbserver-nb/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.014827 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a470eda9-a394-4ecb-a723-404f00bbd45a/openstack-network-exporter/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.044841 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a470eda9-a394-4ecb-a723-404f00bbd45a/ovsdbserver-sb/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.386369 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56555b86cd-htxqh_d282cab8-b359-4fc9-9f34-95b8b1984106/placement-api/0.log" Dec 04 
07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.446290 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65d1124e-f647-4d3c-b10e-c01691fb6c9b/setup-container/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.480494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56555b86cd-htxqh_d282cab8-b359-4fc9-9f34-95b8b1984106/placement-log/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.683068 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65d1124e-f647-4d3c-b10e-c01691fb6c9b/setup-container/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.769717 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65d1124e-f647-4d3c-b10e-c01691fb6c9b/rabbitmq/0.log" Dec 04 07:16:26 crc kubenswrapper[4832]: I1204 07:16:26.964312 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5152b11-80fa-4fd7-90df-132972214b18/setup-container/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.231606 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5152b11-80fa-4fd7-90df-132972214b18/setup-container/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.294616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4dlfc_98e6fcf2-9409-4e06-846b-d96d4106e2b8/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.385496 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5152b11-80fa-4fd7-90df-132972214b18/rabbitmq/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.606122 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-dzcnn_98f3d48c-b338-4b27-893d-f83a3e55ccd8/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.610941 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tvnzh_2f4cc7d6-382c-46c7-a422-d728d7d8aa19/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.820022 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-95gcb_cbeb7492-95ba-4887-afee-a0fada68f151/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:27 crc kubenswrapper[4832]: I1204 07:16:27.909609 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ztwr8_54ad7a01-5a9d-4735-b86d-391a24a663ad/ssh-known-hosts-edpm-deployment/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.125824 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8675b9cf45-xl2pz_6b7c2a80-b3fe-4243-9ea6-19e34f132a16/proxy-server/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.286843 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8675b9cf45-xl2pz_6b7c2a80-b3fe-4243-9ea6-19e34f132a16/proxy-httpd/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.379910 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vnbbf_2aaa5481-3d69-438a-80be-5511ecc55ddf/swift-ring-rebalance/0.log" Dec 04 
07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.527736 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-auditor/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.533805 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-reaper/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.673057 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-replicator/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.793372 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-auditor/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.808727 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/account-server/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.851486 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-replicator/0.log" Dec 04 07:16:28 crc kubenswrapper[4832]: I1204 07:16:28.946204 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-server/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.023544 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/container-updater/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.126801 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-auditor/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.145356 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-expirer/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.257138 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-server/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.267959 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-replicator/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.395089 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/object-updater/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.429276 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/rsync/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.481119 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_5889bafa-1999-43e3-846b-234db0db6e83/swift-recon-cron/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.692207 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hlb5p_69a026e8-d207-4ccd-86c7-6e646a80529c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.713465 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_068b63a2-ea9f-4022-8a42-8d345222f5a7/tempest-tests-tempest-tests-runner/0.log" Dec 04 07:16:29 crc kubenswrapper[4832]: I1204 07:16:29.904102 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b6d3a092-b799-497b-9ca7-10f0578b0f7b/test-operator-logs-container/0.log" Dec 04 07:16:30 crc kubenswrapper[4832]: I1204 07:16:30.038589 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-smwz7_c71ce848-b1d4-4cf6-8e0b-05f8ffc5ecac/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.362212 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.362800 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.362851 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.363690 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fa00dd8f0c116624370f95fff5190bd09266270dbd554276712c48bf66ab985"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.363737 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://3fa00dd8f0c116624370f95fff5190bd09266270dbd554276712c48bf66ab985" gracePeriod=600 Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.670932 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="3fa00dd8f0c116624370f95fff5190bd09266270dbd554276712c48bf66ab985" exitCode=0 Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.670997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"3fa00dd8f0c116624370f95fff5190bd09266270dbd554276712c48bf66ab985"} Dec 04 07:16:35 crc kubenswrapper[4832]: I1204 07:16:35.671080 4832 scope.go:117] "RemoveContainer" containerID="571248938e87753730c1d049962d15d4a234778b06f1a77767e8ea10bef603ac" Dec 04 07:16:36 crc kubenswrapper[4832]: I1204 07:16:36.702610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" 
event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerStarted","Data":"61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"} Dec 04 07:16:39 crc kubenswrapper[4832]: I1204 07:16:39.101448 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e3b075a1-3f92-493c-93d2-a776141dba44/memcached/0.log" Dec 04 07:16:58 crc kubenswrapper[4832]: I1204 07:16:58.466001 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/util/0.log" Dec 04 07:16:58 crc kubenswrapper[4832]: I1204 07:16:58.741199 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/util/0.log" Dec 04 07:16:58 crc kubenswrapper[4832]: I1204 07:16:58.749780 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/pull/0.log" Dec 04 07:16:58 crc kubenswrapper[4832]: I1204 07:16:58.779329 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/pull/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.041521 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/util/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.074504 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/pull/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.099726 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_94924492a91c9de3b6fb6c8886ce5f89dd6c171166f9169ee642763bc6vlrbr_b7bde3e6-de8b-40eb-abe9-6c923b41530b/extract/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.338761 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vjxxr_a85cdbe2-2e25-43b2-bcad-55aaf1e6755d/manager/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.344627 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-wwmfh_ef8f8bec-efa4-4239-839d-791aed710641/kube-rbac-proxy/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.367544 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vjxxr_a85cdbe2-2e25-43b2-bcad-55aaf1e6755d/kube-rbac-proxy/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.583422 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-wwmfh_ef8f8bec-efa4-4239-839d-791aed710641/manager/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.646206 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-9cmtc_49edbb71-76d8-4f14-986d-9fd821c55ff4/kube-rbac-proxy/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.691164 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-9cmtc_49edbb71-76d8-4f14-986d-9fd821c55ff4/manager/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.824604 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-s5wdp_f17d47bc-9039-4195-bdbd-e9f58d4c305b/kube-rbac-proxy/0.log" Dec 04 07:16:59 crc kubenswrapper[4832]: I1204 07:16:59.982190 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-s5wdp_f17d47bc-9039-4195-bdbd-e9f58d4c305b/manager/0.log" Dec 04 07:17:00 crc kubenswrapper[4832]: I1204 07:17:00.028994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-7x9qz_860c33f9-d57a-45b6-bc73-670d92e753a4/kube-rbac-proxy/0.log" Dec 04 07:17:00 crc kubenswrapper[4832]: I1204 07:17:00.088441 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-7x9qz_860c33f9-d57a-45b6-bc73-670d92e753a4/manager/0.log" Dec 04 07:17:00 crc kubenswrapper[4832]: I1204 07:17:00.632837 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-8qb2w_e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f/kube-rbac-proxy/0.log" Dec 04 07:17:00 crc kubenswrapper[4832]: I1204 07:17:00.727303 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-8qb2w_e2a00f81-6eba-4338-adb6-f7ccfd9ccc4f/manager/0.log" Dec 04 07:17:00 crc kubenswrapper[4832]: I1204 07:17:00.851029 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-wr29d_69747c52-1139-4d71-be0d-d6b8d534f0bf/kube-rbac-proxy/0.log" Dec 04 07:17:00 crc kubenswrapper[4832]: I1204 07:17:00.985879 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-6shfb_2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9/kube-rbac-proxy/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.076570 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-wr29d_69747c52-1139-4d71-be0d-d6b8d534f0bf/manager/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.122420 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-6shfb_2e3827ee-c8ae-4c96-b7ef-8605bcf4e2d9/manager/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.235301 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xd7gs_84bf2c21-9b47-46f8-970e-e2e34c5d0112/kube-rbac-proxy/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.390288 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-xd7gs_84bf2c21-9b47-46f8-970e-e2e34c5d0112/manager/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.475773 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-zd4wx_7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df/kube-rbac-proxy/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.523110 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-zd4wx_7d7242e2-f1a1-4bbc-b9e8-fdb337cc74df/manager/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.663892 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hwpjd_c0cedc81-309b-4d1f-8349-632ca9d38e96/kube-rbac-proxy/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.725071 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hwpjd_c0cedc81-309b-4d1f-8349-632ca9d38e96/manager/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.835870 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dr2cc_35d20429-0e0e-4090-8d0b-9a590e8fd9ab/kube-rbac-proxy/0.log" Dec 04 07:17:01 crc kubenswrapper[4832]: I1204 07:17:01.944760 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-dr2cc_35d20429-0e0e-4090-8d0b-9a590e8fd9ab/manager/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.084890 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-djqmz_81848f9c-5ee4-4fbc-a744-701009bcbe53/kube-rbac-proxy/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.129435 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-djqmz_81848f9c-5ee4-4fbc-a744-701009bcbe53/manager/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.243771 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-zgnkq_e8500aa8-6a4f-4d7b-8939-eab62a946850/kube-rbac-proxy/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.269236 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-zgnkq_e8500aa8-6a4f-4d7b-8939-eab62a946850/manager/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.401484 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr_4226c957-fd5d-4b1d-84ca-a94e76ff138c/kube-rbac-proxy/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.449238 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4fn2hr_4226c957-fd5d-4b1d-84ca-a94e76ff138c/manager/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.852026 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hzcq6_37203d32-9ea9-4649-b269-71beabc056f9/registry-server/0.log" Dec 04 07:17:02 crc kubenswrapper[4832]: I1204 07:17:02.865585 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c75cfccc8-zchmr_5413f6c9-52d6-44d8-b58b-babf5f5d4541/operator/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.032951 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lbvnf_63f185bd-a5f7-40a2-b51f-f60bf2c161a9/kube-rbac-proxy/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 
07:17:03.200774 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-lr247_7184b79e-0476-4d6d-99f3-329ad46dff61/kube-rbac-proxy/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.248578 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lbvnf_63f185bd-a5f7-40a2-b51f-f60bf2c161a9/manager/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.278677 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-lr247_7184b79e-0476-4d6d-99f3-329ad46dff61/manager/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.474830 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qx7fl_626ec042-7ccd-4a54-8625-de8861efca16/operator/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.600456 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wjbl9_cac84290-1321-4a86-a4c0-06019e9d5dfd/kube-rbac-proxy/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.742766 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-wjbl9_cac84290-1321-4a86-a4c0-06019e9d5dfd/manager/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.824444 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-fl4dk_ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea/kube-rbac-proxy/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.920839 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-fl4dk_ce4386f3-0e68-4f17-a9b5-ab9197e4c8ea/manager/0.log" Dec 04 07:17:03 crc kubenswrapper[4832]: I1204 07:17:03.923232 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5986db9d67-699q9_57013f06-c328-4c9c-b4c9-284df662cc0e/manager/0.log" Dec 04 07:17:04 crc kubenswrapper[4832]: I1204 07:17:04.034321 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zc52r_f897a405-3157-4e56-b2b8-1076557cab9e/kube-rbac-proxy/0.log" Dec 04 07:17:04 crc kubenswrapper[4832]: I1204 07:17:04.071753 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zc52r_f897a405-3157-4e56-b2b8-1076557cab9e/manager/0.log" Dec 04 07:17:04 crc kubenswrapper[4832]: I1204 07:17:04.166195 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-htcvz_e6d35b26-0a9e-4174-a073-d0a608dbafcd/kube-rbac-proxy/0.log" Dec 04 07:17:04 crc kubenswrapper[4832]: I1204 07:17:04.235987 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-htcvz_e6d35b26-0a9e-4174-a073-d0a608dbafcd/manager/0.log" Dec 04 07:17:24 crc kubenswrapper[4832]: I1204 07:17:24.854584 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zzf4r_d35e6baa-6315-48ee-904c-05da7d436283/control-plane-machine-set-operator/0.log" Dec 04 
07:17:25 crc kubenswrapper[4832]: I1204 07:17:25.034817 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcvsv_8b214f93-e9ab-4500-9c6b-6319c5570459/kube-rbac-proxy/0.log" Dec 04 07:17:25 crc kubenswrapper[4832]: I1204 07:17:25.518346 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gcvsv_8b214f93-e9ab-4500-9c6b-6319c5570459/machine-api-operator/0.log" Dec 04 07:17:38 crc kubenswrapper[4832]: I1204 07:17:38.817446 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7cv2p_982879a5-56a8-46a1-ac5f-73023f9a1ddc/cert-manager-controller/0.log" Dec 04 07:17:38 crc kubenswrapper[4832]: I1204 07:17:38.937124 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8smzg_ce0aa020-53b7-4687-b620-659e270dbcc3/cert-manager-cainjector/0.log" Dec 04 07:17:38 crc kubenswrapper[4832]: I1204 07:17:38.978830 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-jkfns_801084d1-2568-40d3-b9a1-3f3d43cecdea/cert-manager-webhook/0.log" Dec 04 07:17:51 crc kubenswrapper[4832]: I1204 07:17:51.662894 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-jfb7v_85a05826-c1ab-484b-b658-051dc78add17/nmstate-console-plugin/0.log" Dec 04 07:17:51 crc kubenswrapper[4832]: I1204 07:17:51.827955 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4qw9l_3d1046ad-79df-4e1c-8c25-6af2a0379417/nmstate-handler/0.log" Dec 04 07:17:51 crc kubenswrapper[4832]: I1204 07:17:51.888284 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gcvj4_a6d2dc02-8689-4c6b-bde6-f9120db9f714/kube-rbac-proxy/0.log" Dec 04 07:17:51 crc kubenswrapper[4832]: I1204 07:17:51.912789 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-gcvj4_a6d2dc02-8689-4c6b-bde6-f9120db9f714/nmstate-metrics/0.log" Dec 04 07:17:52 crc kubenswrapper[4832]: I1204 07:17:52.432560 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-6ntwk_097d6138-4a11-4545-bb6e-a61ea6cff7fb/nmstate-webhook/0.log" Dec 04 07:17:52 crc kubenswrapper[4832]: I1204 07:17:52.435328 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-rqknt_2a85a45c-df69-4030-af49-e7f2bb0b755e/nmstate-operator/0.log" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.172288 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvb59"] Dec 04 07:18:00 crc kubenswrapper[4832]: E1204 07:18:00.174045 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680b50b2-86e3-43a1-968d-1f897c625ab1" containerName="container-00" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.174066 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="680b50b2-86e3-43a1-968d-1f897c625ab1" containerName="container-00" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.174339 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="680b50b2-86e3-43a1-968d-1f897c625ab1" containerName="container-00" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.180413 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.191301 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvb59"] Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.295263 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wnn\" (UniqueName: \"kubernetes.io/projected/ab885015-e694-4637-843a-0296a3e45d25-kube-api-access-86wnn\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.295965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-utilities\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.296180 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-catalog-content\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.399257 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86wnn\" (UniqueName: \"kubernetes.io/projected/ab885015-e694-4637-843a-0296a3e45d25-kube-api-access-86wnn\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.399475 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-utilities\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.399540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-catalog-content\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.400156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-utilities\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.400223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-catalog-content\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.429347 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-86wnn\" (UniqueName: \"kubernetes.io/projected/ab885015-e694-4637-843a-0296a3e45d25-kube-api-access-86wnn\") pod \"community-operators-zvb59\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:00 crc kubenswrapper[4832]: I1204 07:18:00.537920 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:01 crc kubenswrapper[4832]: I1204 07:18:01.136602 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvb59"] Dec 04 07:18:01 crc kubenswrapper[4832]: I1204 07:18:01.696168 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab885015-e694-4637-843a-0296a3e45d25" containerID="d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534" exitCode=0 Dec 04 07:18:01 crc kubenswrapper[4832]: I1204 07:18:01.696284 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerDied","Data":"d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534"} Dec 04 07:18:01 crc kubenswrapper[4832]: I1204 07:18:01.696716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerStarted","Data":"3cc4d9c223ad4430156f2539efdf6f6436a6dd4fe25cc3c50765bb06cb11f04c"} Dec 04 07:18:01 crc kubenswrapper[4832]: I1204 07:18:01.699563 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 07:18:02 crc kubenswrapper[4832]: I1204 07:18:02.756176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerStarted","Data":"f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3"} Dec 04 07:18:03 crc kubenswrapper[4832]: I1204 07:18:03.725384 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab885015-e694-4637-843a-0296a3e45d25" containerID="f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3" exitCode=0 Dec 04 07:18:03 crc kubenswrapper[4832]: I1204 07:18:03.725789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerDied","Data":"f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3"} Dec 04 07:18:04 crc kubenswrapper[4832]: I1204 07:18:04.740340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerStarted","Data":"9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376"} Dec 04 07:18:04 crc kubenswrapper[4832]: I1204 07:18:04.777944 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvb59" podStartSLOduration=2.347907144 podStartE2EDuration="4.777915664s" podCreationTimestamp="2025-12-04 07:18:00 +0000 UTC" firstStartedPulling="2025-12-04 07:18:01.698973329 +0000 UTC m=+4137.311791055" lastFinishedPulling="2025-12-04 07:18:04.128981869 +0000 UTC m=+4139.741799575" observedRunningTime="2025-12-04 07:18:04.767859798 +0000 UTC m=+4140.380677514" watchObservedRunningTime="2025-12-04 
07:18:04.777915664 +0000 UTC m=+4140.390733370" Dec 04 07:18:07 crc kubenswrapper[4832]: I1204 07:18:07.980042 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-gclzl_a1b7280c-f3d1-4f5b-9f14-bf413e597077/kube-rbac-proxy/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.090104 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-gclzl_a1b7280c-f3d1-4f5b-9f14-bf413e597077/controller/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.233121 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.379363 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.420003 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.452789 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.456985 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.640761 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.660833 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.669502 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.701439 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.871108 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-reloader/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.874748 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-frr-files/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.889850 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/cp-metrics/0.log" Dec 04 07:18:08 crc kubenswrapper[4832]: I1204 07:18:08.926946 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/controller/0.log" Dec 04 07:18:09 crc kubenswrapper[4832]: I1204 07:18:09.078584 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/kube-rbac-proxy/0.log" Dec 04 07:18:09 crc kubenswrapper[4832]: I1204 07:18:09.086977 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/frr-metrics/0.log" Dec 04 07:18:09 crc kubenswrapper[4832]: I1204 07:18:09.157281 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/kube-rbac-proxy-frr/0.log" Dec 04 07:18:09 crc kubenswrapper[4832]: I1204 07:18:09.362986 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/reloader/0.log" Dec 04 07:18:09 crc kubenswrapper[4832]: I1204 07:18:09.661695 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9h2b6_9e61f6af-2150-458f-9ace-ce824ac50448/frr-k8s-webhook-server/0.log" Dec 04 07:18:09 crc kubenswrapper[4832]: I1204 07:18:09.829226 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5dff6547bc-rp4gv_59a7f669-83c5-454f-a192-94642ab2fe06/manager/0.log" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.042703 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f45496cc4-r8fz4_6c20405b-b33f-49ad-a10f-a9b32a3d320b/webhook-server/0.log" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.255305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9wd74_e4f68a6a-9df0-4ad3-bb51-b662bfb994e9/frr/0.log" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.260001 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwbkx_3a0011d7-d649-42fa-bd27-b98eb4a958a3/kube-rbac-proxy/0.log" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.539922 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.539986 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.543888 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwbkx_3a0011d7-d649-42fa-bd27-b98eb4a958a3/speaker/0.log" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.600793 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.838049 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:10 crc kubenswrapper[4832]: I1204 07:18:10.895864 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvb59"] Dec 04 07:18:12 crc kubenswrapper[4832]: I1204 07:18:12.807413 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zvb59" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="registry-server" containerID="cri-o://9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376" gracePeriod=2 Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.377468 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.523981 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-utilities\") pod \"ab885015-e694-4637-843a-0296a3e45d25\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.524158 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-catalog-content\") pod \"ab885015-e694-4637-843a-0296a3e45d25\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.524284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86wnn\" (UniqueName: \"kubernetes.io/projected/ab885015-e694-4637-843a-0296a3e45d25-kube-api-access-86wnn\") pod \"ab885015-e694-4637-843a-0296a3e45d25\" (UID: \"ab885015-e694-4637-843a-0296a3e45d25\") " Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.525646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-utilities" (OuterVolumeSpecName: "utilities") pod "ab885015-e694-4637-843a-0296a3e45d25" (UID: "ab885015-e694-4637-843a-0296a3e45d25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.533907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab885015-e694-4637-843a-0296a3e45d25-kube-api-access-86wnn" (OuterVolumeSpecName: "kube-api-access-86wnn") pod "ab885015-e694-4637-843a-0296a3e45d25" (UID: "ab885015-e694-4637-843a-0296a3e45d25"). InnerVolumeSpecName "kube-api-access-86wnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.595412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab885015-e694-4637-843a-0296a3e45d25" (UID: "ab885015-e694-4637-843a-0296a3e45d25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.626326 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.626370 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab885015-e694-4637-843a-0296a3e45d25-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.626382 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86wnn\" (UniqueName: \"kubernetes.io/projected/ab885015-e694-4637-843a-0296a3e45d25-kube-api-access-86wnn\") on node \"crc\" DevicePath \"\"" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.827183 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab885015-e694-4637-843a-0296a3e45d25" containerID="9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376" exitCode=0 Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.827252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerDied","Data":"9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376"} Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.827678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvb59" event={"ID":"ab885015-e694-4637-843a-0296a3e45d25","Type":"ContainerDied","Data":"3cc4d9c223ad4430156f2539efdf6f6436a6dd4fe25cc3c50765bb06cb11f04c"} Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.827299 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvb59" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.827717 4832 scope.go:117] "RemoveContainer" containerID="9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.857834 4832 scope.go:117] "RemoveContainer" containerID="f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.889008 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvb59"] Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.900916 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zvb59"] Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.917952 4832 scope.go:117] "RemoveContainer" containerID="d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.940073 4832 scope.go:117] "RemoveContainer" containerID="9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376" Dec 04 07:18:13 crc kubenswrapper[4832]: E1204 07:18:13.940569 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376\": container with ID starting with 9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376 not found: ID does not exist" containerID="9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.940623 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376"} err="failed to get container status \"9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376\": rpc error: code = NotFound desc = could not find container \"9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376\": container with ID starting with 9782be3f940daf01d55c5bab0fdfe27061571fcbd20265210884cc68393df376 not found: ID does not exist" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.940657 4832 scope.go:117] "RemoveContainer" containerID="f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3" Dec 04 07:18:13 crc kubenswrapper[4832]: E1204 07:18:13.943952 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3\": container with ID starting with f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3 not found: ID does not exist" containerID="f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.943992 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3"} err="failed to get container status \"f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3\": rpc error: code = NotFound desc = could not find container \"f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3\": container with ID starting with f088898af4a9e9f942b798ae19e91e2812bedcc1afac313067ca862b5fd1cec3 not found: ID does not exist" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.944014 4832 scope.go:117] "RemoveContainer" 
containerID="d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534" Dec 04 07:18:13 crc kubenswrapper[4832]: E1204 07:18:13.944459 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534\": container with ID starting with d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534 not found: ID does not exist" containerID="d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534" Dec 04 07:18:13 crc kubenswrapper[4832]: I1204 07:18:13.944531 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534"} err="failed to get container status \"d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534\": rpc error: code = NotFound desc = could not find container \"d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534\": container with ID starting with d40d3a3cc861ea4ea927387293969dcd120b6f2fff0a2d70aa0af7557a2b0534 not found: ID does not exist" Dec 04 07:18:14 crc kubenswrapper[4832]: I1204 07:18:14.724502 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab885015-e694-4637-843a-0296a3e45d25" path="/var/lib/kubelet/pods/ab885015-e694-4637-843a-0296a3e45d25/volumes" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.358339 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/util/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.555189 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/util/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.567834 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/pull/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.573240 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/pull/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.808116 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/util/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.817710 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/pull/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.855984 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fjqls4_1076a843-3b6f-4c93-9aa4-0207c2586cbb/extract/0.log" Dec 04 07:18:24 crc kubenswrapper[4832]: I1204 07:18:24.996853 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/util/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.185557 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/util/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.186598 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/pull/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.221445 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/pull/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.369767 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/util/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.428483 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/pull/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.464220 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8328d48_d06954e0-1987-4ea1-8573-f3232b1a8e7e/extract/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.589355 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-utilities/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.797151 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-utilities/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.825744 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-content/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.855765 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-content/0.log" Dec 04 07:18:25 crc kubenswrapper[4832]: I1204 07:18:25.988971 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-utilities/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.062223 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/extract-content/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.229220 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-utilities/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.505474 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-utilities/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.538891 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-content/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.544995 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-content/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.562501 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dhwr_6c6758d5-9eeb-4895-9e0e-d4364556afc0/registry-server/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.735497 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-utilities/0.log" Dec 04 07:18:26 crc kubenswrapper[4832]: I1204 07:18:26.794690 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/extract-content/0.log" Dec 04 07:18:27 crc kubenswrapper[4832]: I1204 07:18:27.021453 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q8xv8_d5e811d7-d4fd-4504-b6d0-8d653628465d/marketplace-operator/0.log" Dec 04 07:18:27 crc kubenswrapper[4832]: I1204 07:18:27.436564 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dl7h7_9ed7e241-4b9d-42f9-b2de-ee72694a5ba2/registry-server/0.log" Dec 04 07:18:27 crc kubenswrapper[4832]: I1204 07:18:27.833414 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-utilities/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.038689 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-content/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.078626 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-utilities/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.107499 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-content/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.244140 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-utilities/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.275382 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/extract-content/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.387651 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-utilities/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.429979 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xqwjd_4d017b88-ca36-417e-9f64-051bd0819f20/registry-server/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.580385 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-content/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.587953 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-utilities/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.606990 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-content/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.785426 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-utilities/0.log" Dec 04 07:18:28 crc kubenswrapper[4832]: I1204 07:18:28.814779 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/extract-content/0.log" Dec 04 07:18:29 crc kubenswrapper[4832]: I1204 07:18:29.424989 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7dfrh_d383066c-be25-44c6-854b-0d57c0e91e6b/registry-server/0.log" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.849929 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-52896"] Dec 04 07:19:03 crc kubenswrapper[4832]: E1204 07:19:03.850934 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="registry-server" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.850952 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="registry-server" Dec 04 07:19:03 crc kubenswrapper[4832]: E1204 07:19:03.850982 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="extract-utilities" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.850990 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="extract-utilities" Dec 04 07:19:03 crc kubenswrapper[4832]: E1204 07:19:03.851015 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="extract-content" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.851021 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="extract-content" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.851263 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab885015-e694-4637-843a-0296a3e45d25" containerName="registry-server" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.852899 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.863235 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-52896"] Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.951926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-catalog-content\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.951990 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khj7m\" (UniqueName: \"kubernetes.io/projected/03772d99-c1af-4289-a616-8b70aa5532bc-kube-api-access-khj7m\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:03 crc kubenswrapper[4832]: I1204 07:19:03.952189 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-utilities\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.055212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-utilities\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.055420 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-catalog-content\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.055457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khj7m\" (UniqueName: \"kubernetes.io/projected/03772d99-c1af-4289-a616-8b70aa5532bc-kube-api-access-khj7m\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.055944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-utilities\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.056168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-catalog-content\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.559443 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-khj7m\" (UniqueName: \"kubernetes.io/projected/03772d99-c1af-4289-a616-8b70aa5532bc-kube-api-access-khj7m\") pod \"redhat-operators-52896\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") " pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:04 crc kubenswrapper[4832]: I1204 07:19:04.822853 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-52896" Dec 04 07:19:05 crc kubenswrapper[4832]: I1204 07:19:05.362643 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 07:19:05 crc kubenswrapper[4832]: I1204 07:19:05.363205 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 07:19:05 crc kubenswrapper[4832]: I1204 07:19:05.421870 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-52896"] Dec 04 07:19:06 crc kubenswrapper[4832]: I1204 07:19:06.362603 4832 generic.go:334] "Generic (PLEG): container finished" podID="03772d99-c1af-4289-a616-8b70aa5532bc" containerID="4d8cdc919203a40d4add810ca0df0a4af3bb94345e6756a2de0e22bba614fbaf" exitCode=0 Dec 04 07:19:06 crc kubenswrapper[4832]: I1204 07:19:06.362745 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerDied","Data":"4d8cdc919203a40d4add810ca0df0a4af3bb94345e6756a2de0e22bba614fbaf"} Dec 04 07:19:06 crc kubenswrapper[4832]: I1204 07:19:06.363141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerStarted","Data":"70a9dba9d6c1065449ef7f1a17db5c3c8263f77d8e061754a2cb3a5502cb92bc"} Dec 04 07:19:07 crc kubenswrapper[4832]: I1204 07:19:07.371884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerStarted","Data":"1de4a54bdce59ea3700165438491c5189d8906f00ff0060afec41a71c417e06a"} Dec 04 07:19:08 crc kubenswrapper[4832]: I1204 07:19:08.384570 4832 generic.go:334] "Generic (PLEG): container finished" podID="03772d99-c1af-4289-a616-8b70aa5532bc" containerID="1de4a54bdce59ea3700165438491c5189d8906f00ff0060afec41a71c417e06a" exitCode=0 Dec 04 07:19:08 crc kubenswrapper[4832]: I1204 07:19:08.384669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerDied","Data":"1de4a54bdce59ea3700165438491c5189d8906f00ff0060afec41a71c417e06a"} Dec 04 07:19:09 crc kubenswrapper[4832]: I1204 07:19:09.396610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerStarted","Data":"5f89fed3f8a23357d8068ee03152e38d4130c7fa4aa3bb08488a2ce3cbfaf232"} Dec 04 07:19:09 crc kubenswrapper[4832]: I1204 07:19:09.424478 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-52896" podStartSLOduration=3.916794134 podStartE2EDuration="6.424450443s" podCreationTimestamp="2025-12-04 07:19:03 +0000 UTC" firstStartedPulling="2025-12-04 07:19:06.364695218 +0000 UTC m=+4201.977512924" lastFinishedPulling="2025-12-04 07:19:08.872351527 +0000 UTC m=+4204.485169233" observedRunningTime="2025-12-04 07:19:09.419756869 +0000 UTC m=+4205.032574585" watchObservedRunningTime="2025-12-04 07:19:09.424450443 +0000 UTC m=+4205.037268159"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.692371 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x4krb"]
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.695851 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.724989 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4krb"]
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.784406 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-utilities\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.784598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-catalog-content\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.784913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4hdv\" (UniqueName: \"kubernetes.io/projected/511397be-0233-4cc1-8780-3b2adafc2008-kube-api-access-f4hdv\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.887729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-utilities\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.887873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-catalog-content\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.887943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4hdv\" (UniqueName: \"kubernetes.io/projected/511397be-0233-4cc1-8780-3b2adafc2008-kube-api-access-f4hdv\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.888414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-utilities\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:13 crc kubenswrapper[4832]: I1204 07:19:13.889565 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-catalog-content\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:14 crc kubenswrapper[4832]: I1204 07:19:14.557834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4hdv\" (UniqueName: \"kubernetes.io/projected/511397be-0233-4cc1-8780-3b2adafc2008-kube-api-access-f4hdv\") pod \"redhat-marketplace-x4krb\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") " pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:14 crc kubenswrapper[4832]: I1204 07:19:14.622787 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:14 crc kubenswrapper[4832]: I1204 07:19:14.823184 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-52896"
Dec 04 07:19:14 crc kubenswrapper[4832]: I1204 07:19:14.823956 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-52896"
Dec 04 07:19:14 crc kubenswrapper[4832]: I1204 07:19:14.890093 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-52896"
Dec 04 07:19:15 crc kubenswrapper[4832]: I1204 07:19:15.133531 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4krb"]
Dec 04 07:19:15 crc kubenswrapper[4832]: I1204 07:19:15.453111 4832 generic.go:334] "Generic (PLEG): container finished" podID="511397be-0233-4cc1-8780-3b2adafc2008" containerID="39c678301e07843fbce6dd87bdaf9cf33a487edc6766c4813b87885d63737c9d" exitCode=0
Dec 04 07:19:15 crc kubenswrapper[4832]: I1204 07:19:15.453210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4krb" event={"ID":"511397be-0233-4cc1-8780-3b2adafc2008","Type":"ContainerDied","Data":"39c678301e07843fbce6dd87bdaf9cf33a487edc6766c4813b87885d63737c9d"}
Dec 04 07:19:15 crc kubenswrapper[4832]: I1204 07:19:15.453761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4krb" event={"ID":"511397be-0233-4cc1-8780-3b2adafc2008","Type":"ContainerStarted","Data":"726424029fd7a970b588cd217a97087f4aecd95e43464173f0f29e13f0a1a663"}
Dec 04 07:19:15 crc kubenswrapper[4832]: I1204 07:19:15.506845 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-52896"
Dec 04 07:19:16 crc kubenswrapper[4832]: I1204 07:19:16.856877 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-52896"]
Dec 04 07:19:17 crc kubenswrapper[4832]: I1204 07:19:17.477105 4832 generic.go:334] "Generic (PLEG): container finished" podID="511397be-0233-4cc1-8780-3b2adafc2008" containerID="25f7f547cc2452afe2ac802c6d998821bd5571cdff915a2cea07251bd3c4dd49" exitCode=0
Dec 04 07:19:17 crc kubenswrapper[4832]: I1204 07:19:17.477224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4krb" event={"ID":"511397be-0233-4cc1-8780-3b2adafc2008","Type":"ContainerDied","Data":"25f7f547cc2452afe2ac802c6d998821bd5571cdff915a2cea07251bd3c4dd49"}
Dec 04 07:19:17 crc kubenswrapper[4832]: I1204 07:19:17.477849 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-52896" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="registry-server" containerID="cri-o://5f89fed3f8a23357d8068ee03152e38d4130c7fa4aa3bb08488a2ce3cbfaf232" gracePeriod=2
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.497688 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4krb" event={"ID":"511397be-0233-4cc1-8780-3b2adafc2008","Type":"ContainerStarted","Data":"d31829f6a733e04fe97c12cfa47583377c9228ccc5a65c105bcd5c3a102f2add"}
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.500810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerDied","Data":"5f89fed3f8a23357d8068ee03152e38d4130c7fa4aa3bb08488a2ce3cbfaf232"}
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.500759 4832 generic.go:334] "Generic (PLEG): container finished" podID="03772d99-c1af-4289-a616-8b70aa5532bc" containerID="5f89fed3f8a23357d8068ee03152e38d4130c7fa4aa3bb08488a2ce3cbfaf232" exitCode=0
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.535961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x4krb" podStartSLOduration=2.776038952 podStartE2EDuration="6.535937196s" podCreationTimestamp="2025-12-04 07:19:13 +0000 UTC" firstStartedPulling="2025-12-04 07:19:15.455686745 +0000 UTC m=+4211.068504451" lastFinishedPulling="2025-12-04 07:19:19.215584989 +0000 UTC m=+4214.828402695" observedRunningTime="2025-12-04 07:19:19.532160215 +0000 UTC m=+4215.144977921" watchObservedRunningTime="2025-12-04 07:19:19.535937196 +0000 UTC m=+4215.148754912"
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.865623 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-52896"
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.962955 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-utilities\") pod \"03772d99-c1af-4289-a616-8b70aa5532bc\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") "
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.963008 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-catalog-content\") pod \"03772d99-c1af-4289-a616-8b70aa5532bc\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") "
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.963079 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khj7m\" (UniqueName: \"kubernetes.io/projected/03772d99-c1af-4289-a616-8b70aa5532bc-kube-api-access-khj7m\") pod \"03772d99-c1af-4289-a616-8b70aa5532bc\" (UID: \"03772d99-c1af-4289-a616-8b70aa5532bc\") "
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.964139 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-utilities" (OuterVolumeSpecName: "utilities") pod "03772d99-c1af-4289-a616-8b70aa5532bc" (UID: "03772d99-c1af-4289-a616-8b70aa5532bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:19:19 crc kubenswrapper[4832]: I1204 07:19:19.972590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03772d99-c1af-4289-a616-8b70aa5532bc-kube-api-access-khj7m" (OuterVolumeSpecName: "kube-api-access-khj7m") pod "03772d99-c1af-4289-a616-8b70aa5532bc" (UID: "03772d99-c1af-4289-a616-8b70aa5532bc"). InnerVolumeSpecName "kube-api-access-khj7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.065741 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.065792 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khj7m\" (UniqueName: \"kubernetes.io/projected/03772d99-c1af-4289-a616-8b70aa5532bc-kube-api-access-khj7m\") on node \"crc\" DevicePath \"\""
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.086243 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03772d99-c1af-4289-a616-8b70aa5532bc" (UID: "03772d99-c1af-4289-a616-8b70aa5532bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.167247 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03772d99-c1af-4289-a616-8b70aa5532bc-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.525011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52896" event={"ID":"03772d99-c1af-4289-a616-8b70aa5532bc","Type":"ContainerDied","Data":"70a9dba9d6c1065449ef7f1a17db5c3c8263f77d8e061754a2cb3a5502cb92bc"}
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.525105 4832 scope.go:117] "RemoveContainer" containerID="5f89fed3f8a23357d8068ee03152e38d4130c7fa4aa3bb08488a2ce3cbfaf232"
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.525207 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-52896"
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.557728 4832 scope.go:117] "RemoveContainer" containerID="1de4a54bdce59ea3700165438491c5189d8906f00ff0060afec41a71c417e06a"
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.591730 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-52896"]
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.600260 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-52896"]
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.608632 4832 scope.go:117] "RemoveContainer" containerID="4d8cdc919203a40d4add810ca0df0a4af3bb94345e6756a2de0e22bba614fbaf"
Dec 04 07:19:20 crc kubenswrapper[4832]: I1204 07:19:20.726669 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" path="/var/lib/kubelet/pods/03772d99-c1af-4289-a616-8b70aa5532bc/volumes"
Dec 04 07:19:24 crc kubenswrapper[4832]: I1204 07:19:24.623477 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:24 crc kubenswrapper[4832]: I1204 07:19:24.624043 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:24 crc kubenswrapper[4832]: I1204 07:19:24.707663 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:25 crc kubenswrapper[4832]: I1204 07:19:25.661597 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:25 crc kubenswrapper[4832]: I1204 07:19:25.718428 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4krb"]
Dec 04 07:19:27 crc kubenswrapper[4832]: I1204 07:19:27.614866 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x4krb" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="registry-server" containerID="cri-o://d31829f6a733e04fe97c12cfa47583377c9228ccc5a65c105bcd5c3a102f2add" gracePeriod=2
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.627374 4832 generic.go:334] "Generic (PLEG): container finished" podID="511397be-0233-4cc1-8780-3b2adafc2008" containerID="d31829f6a733e04fe97c12cfa47583377c9228ccc5a65c105bcd5c3a102f2add" exitCode=0
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.627479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4krb" event={"ID":"511397be-0233-4cc1-8780-3b2adafc2008","Type":"ContainerDied","Data":"d31829f6a733e04fe97c12cfa47583377c9228ccc5a65c105bcd5c3a102f2add"}
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.628113 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4krb" event={"ID":"511397be-0233-4cc1-8780-3b2adafc2008","Type":"ContainerDied","Data":"726424029fd7a970b588cd217a97087f4aecd95e43464173f0f29e13f0a1a663"}
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.628149 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="726424029fd7a970b588cd217a97087f4aecd95e43464173f0f29e13f0a1a663"
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.778364 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.863725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-utilities\") pod \"511397be-0233-4cc1-8780-3b2adafc2008\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") "
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.863948 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4hdv\" (UniqueName: \"kubernetes.io/projected/511397be-0233-4cc1-8780-3b2adafc2008-kube-api-access-f4hdv\") pod \"511397be-0233-4cc1-8780-3b2adafc2008\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") "
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.864074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-catalog-content\") pod \"511397be-0233-4cc1-8780-3b2adafc2008\" (UID: \"511397be-0233-4cc1-8780-3b2adafc2008\") "
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.865143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-utilities" (OuterVolumeSpecName: "utilities") pod "511397be-0233-4cc1-8780-3b2adafc2008" (UID: "511397be-0233-4cc1-8780-3b2adafc2008"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.866846 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.874044 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511397be-0233-4cc1-8780-3b2adafc2008-kube-api-access-f4hdv" (OuterVolumeSpecName: "kube-api-access-f4hdv") pod "511397be-0233-4cc1-8780-3b2adafc2008" (UID: "511397be-0233-4cc1-8780-3b2adafc2008"). InnerVolumeSpecName "kube-api-access-f4hdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.882952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "511397be-0233-4cc1-8780-3b2adafc2008" (UID: "511397be-0233-4cc1-8780-3b2adafc2008"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.969499 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4hdv\" (UniqueName: \"kubernetes.io/projected/511397be-0233-4cc1-8780-3b2adafc2008-kube-api-access-f4hdv\") on node \"crc\" DevicePath \"\""
Dec 04 07:19:28 crc kubenswrapper[4832]: I1204 07:19:28.969544 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/511397be-0233-4cc1-8780-3b2adafc2008-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 07:19:29 crc kubenswrapper[4832]: I1204 07:19:29.636864 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4krb"
Dec 04 07:19:29 crc kubenswrapper[4832]: I1204 07:19:29.682637 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4krb"]
Dec 04 07:19:29 crc kubenswrapper[4832]: I1204 07:19:29.694476 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4krb"]
Dec 04 07:19:30 crc kubenswrapper[4832]: I1204 07:19:30.721075 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511397be-0233-4cc1-8780-3b2adafc2008" path="/var/lib/kubelet/pods/511397be-0233-4cc1-8780-3b2adafc2008/volumes"
Dec 04 07:19:35 crc kubenswrapper[4832]: I1204 07:19:35.363090 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 07:19:35 crc kubenswrapper[4832]: I1204 07:19:35.363792 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 07:20:05 crc kubenswrapper[4832]: I1204 07:20:05.363065 4832 patch_prober.go:28] interesting pod/machine-config-daemon-jl6q4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 07:20:05 crc kubenswrapper[4832]: I1204 07:20:05.363992 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 07:20:05 crc kubenswrapper[4832]: I1204 07:20:05.364073 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4"
Dec 04 07:20:05 crc kubenswrapper[4832]: I1204 07:20:05.365435 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"} pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 07:20:05 crc kubenswrapper[4832]: I1204 07:20:05.365518 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerName="machine-config-daemon" containerID="cri-o://61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830" gracePeriod=600
Dec 04 07:20:05 crc kubenswrapper[4832]: E1204 07:20:05.490713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:20:06 crc kubenswrapper[4832]: I1204 07:20:05.999821 4832 generic.go:334] "Generic (PLEG): container finished" podID="4079cbc8-9860-412d-8bb8-37713e677d1c" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830" exitCode=0
Dec 04 07:20:06 crc kubenswrapper[4832]: I1204 07:20:06.000229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" event={"ID":"4079cbc8-9860-412d-8bb8-37713e677d1c","Type":"ContainerDied","Data":"61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"}
Dec 04 07:20:06 crc kubenswrapper[4832]: I1204 07:20:06.000283 4832 scope.go:117] "RemoveContainer" containerID="3fa00dd8f0c116624370f95fff5190bd09266270dbd554276712c48bf66ab985"
Dec 04 07:20:06 crc kubenswrapper[4832]: I1204 07:20:06.001062 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:20:06 crc kubenswrapper[4832]: E1204 07:20:06.001351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:20:18 crc kubenswrapper[4832]: I1204 07:20:18.715470 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:20:18 crc kubenswrapper[4832]: E1204 07:20:18.716178 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:20:25 crc kubenswrapper[4832]: I1204 07:20:25.211678 4832 generic.go:334] "Generic (PLEG): container finished" podID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerID="72f9025f6984752433daf4e879bc3fadd9e9eedb37ca1bc978214ec5b53d19b4" exitCode=0
Dec 04 07:20:25 crc kubenswrapper[4832]: I1204 07:20:25.212081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" event={"ID":"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84","Type":"ContainerDied","Data":"72f9025f6984752433daf4e879bc3fadd9e9eedb37ca1bc978214ec5b53d19b4"}
Dec 04 07:20:25 crc kubenswrapper[4832]: I1204 07:20:25.214538 4832 scope.go:117] "RemoveContainer" containerID="72f9025f6984752433daf4e879bc3fadd9e9eedb37ca1bc978214ec5b53d19b4"
Dec 04 07:20:25 crc kubenswrapper[4832]: I1204 07:20:25.922423 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkrx7_must-gather-6ctsd_3bf8cdd4-dc37-4fdf-97c7-ab5779457b84/gather/0.log"
Dec 04 07:20:30 crc kubenswrapper[4832]: I1204 07:20:30.711381 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:20:30 crc kubenswrapper[4832]: E1204 07:20:30.712542 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:20:36 crc kubenswrapper[4832]: I1204 07:20:36.650539 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkrx7/must-gather-6ctsd"]
Dec 04 07:20:36 crc kubenswrapper[4832]: I1204 07:20:36.651724 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lkrx7/must-gather-6ctsd" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="copy" containerID="cri-o://8e055f19ba610d0ee04d93694e423843276e6971fc105c371f0c69807f78852e" gracePeriod=2
Dec 04 07:20:36 crc kubenswrapper[4832]: I1204 07:20:36.659170 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkrx7/must-gather-6ctsd"]
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.338047 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkrx7_must-gather-6ctsd_3bf8cdd4-dc37-4fdf-97c7-ab5779457b84/copy/0.log"
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.339269 4832 generic.go:334] "Generic (PLEG): container finished" podID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerID="8e055f19ba610d0ee04d93694e423843276e6971fc105c371f0c69807f78852e" exitCode=143
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.677572 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkrx7_must-gather-6ctsd_3bf8cdd4-dc37-4fdf-97c7-ab5779457b84/copy/0.log"
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.678141 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/must-gather-6ctsd"
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.817973 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz77c\" (UniqueName: \"kubernetes.io/projected/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-kube-api-access-gz77c\") pod \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") "
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.818560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-must-gather-output\") pod \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\" (UID: \"3bf8cdd4-dc37-4fdf-97c7-ab5779457b84\") "
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.824143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-kube-api-access-gz77c" (OuterVolumeSpecName: "kube-api-access-gz77c") pod "3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" (UID: "3bf8cdd4-dc37-4fdf-97c7-ab5779457b84"). InnerVolumeSpecName "kube-api-access-gz77c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.921450 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz77c\" (UniqueName: \"kubernetes.io/projected/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-kube-api-access-gz77c\") on node \"crc\" DevicePath \"\""
Dec 04 07:20:37 crc kubenswrapper[4832]: I1204 07:20:37.971704 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" (UID: "3bf8cdd4-dc37-4fdf-97c7-ab5779457b84"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:20:38 crc kubenswrapper[4832]: I1204 07:20:38.023162 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 04 07:20:38 crc kubenswrapper[4832]: I1204 07:20:38.352914 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkrx7_must-gather-6ctsd_3bf8cdd4-dc37-4fdf-97c7-ab5779457b84/copy/0.log"
Dec 04 07:20:38 crc kubenswrapper[4832]: I1204 07:20:38.353352 4832 scope.go:117] "RemoveContainer" containerID="8e055f19ba610d0ee04d93694e423843276e6971fc105c371f0c69807f78852e"
Dec 04 07:20:38 crc kubenswrapper[4832]: I1204 07:20:38.353451 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkrx7/must-gather-6ctsd"
Dec 04 07:20:38 crc kubenswrapper[4832]: I1204 07:20:38.378168 4832 scope.go:117] "RemoveContainer" containerID="72f9025f6984752433daf4e879bc3fadd9e9eedb37ca1bc978214ec5b53d19b4"
Dec 04 07:20:38 crc kubenswrapper[4832]: I1204 07:20:38.722482 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" path="/var/lib/kubelet/pods/3bf8cdd4-dc37-4fdf-97c7-ab5779457b84/volumes"
Dec 04 07:20:41 crc kubenswrapper[4832]: I1204 07:20:41.711420 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:20:41 crc kubenswrapper[4832]: E1204 07:20:41.712409 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:20:52 crc kubenswrapper[4832]: I1204 07:20:52.710713 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:20:52 crc kubenswrapper[4832]: E1204 07:20:52.711482 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:21:04 crc kubenswrapper[4832]: I1204 07:21:04.722666 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:21:04 crc kubenswrapper[4832]: E1204 07:21:04.724587 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:21:17 crc kubenswrapper[4832]: I1204 07:21:17.711370 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:21:17 crc kubenswrapper[4832]: E1204 07:21:17.712239 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:21:22 crc kubenswrapper[4832]: I1204 07:21:22.697214 4832 scope.go:117] "RemoveContainer" containerID="c51d5525d4765ddb08828b39aaf1961a1af91f61796ee536727ca53cdca5b7c5"
Dec 04 07:21:30 crc kubenswrapper[4832]: I1204 07:21:30.712207 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:21:30 crc kubenswrapper[4832]: E1204 07:21:30.712994 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.775516 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpp8v"]
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776401 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="registry-server"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776419 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="registry-server"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776432 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="copy"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776439 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="copy"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776453 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="extract-content"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776459 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="extract-content"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776475 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="gather"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776481 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="gather"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776497 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="registry-server"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776503 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="registry-server"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776517 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="extract-utilities"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776523 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="extract-utilities"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776536 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="extract-utilities"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776542 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="extract-utilities"
Dec 04 07:21:32 crc kubenswrapper[4832]: E1204 07:21:32.776548 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="extract-content"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776554 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="extract-content"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776731 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="511397be-0233-4cc1-8780-3b2adafc2008" containerName="registry-server"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776744 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="gather"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776760 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf8cdd4-dc37-4fdf-97c7-ab5779457b84" containerName="copy"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.776771 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="03772d99-c1af-4289-a616-8b70aa5532bc" containerName="registry-server"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.778377 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.789927 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpp8v"]
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.913550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zp4\" (UniqueName: \"kubernetes.io/projected/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-kube-api-access-b6zp4\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.913615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-catalog-content\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:32 crc kubenswrapper[4832]: I1204 07:21:32.913654 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-utilities\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.015711 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zp4\" (UniqueName: \"kubernetes.io/projected/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-kube-api-access-b6zp4\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.015795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-catalog-content\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.015851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-utilities\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.016807 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-catalog-content\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.016828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-utilities\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.039578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zp4\" (UniqueName: \"kubernetes.io/projected/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-kube-api-access-b6zp4\") pod \"certified-operators-mpp8v\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") " pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.132003 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.686591 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpp8v"]
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.944693 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" containerID="3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24" exitCode=0
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.944757 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerDied","Data":"3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24"}
Dec 04 07:21:33 crc kubenswrapper[4832]: I1204 07:21:33.944799 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerStarted","Data":"e15528e5c82745e602f3272191849a8e411f60e6375da9d9f5e567c099898643"}
Dec 04 07:21:34 crc kubenswrapper[4832]: I1204 07:21:34.972596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerStarted","Data":"aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b"}
Dec 04 07:21:35 crc kubenswrapper[4832]: I1204 07:21:35.984161 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" containerID="aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b" exitCode=0
Dec 04 07:21:35 crc kubenswrapper[4832]: I1204 07:21:35.984244 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerDied","Data":"aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b"}
Dec 04 07:21:36 crc kubenswrapper[4832]: I1204 07:21:36.997070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerStarted","Data":"9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb"}
Dec 04 07:21:37 crc kubenswrapper[4832]: I1204 07:21:37.025233 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpp8v" podStartSLOduration=2.554227559 podStartE2EDuration="5.025210116s" podCreationTimestamp="2025-12-04 07:21:32 +0000 UTC" firstStartedPulling="2025-12-04 07:21:33.946426786 +0000 UTC m=+4349.559244492" lastFinishedPulling="2025-12-04 07:21:36.417409343 +0000 UTC m=+4352.030227049" observedRunningTime="2025-12-04 07:21:37.023004472 +0000 UTC m=+4352.635822258" watchObservedRunningTime="2025-12-04 07:21:37.025210116 +0000 UTC m=+4352.638027822"
Dec 04 07:21:43 crc kubenswrapper[4832]: I1204 07:21:43.134070 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:43 crc kubenswrapper[4832]: I1204 07:21:43.134853 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:43 crc kubenswrapper[4832]: I1204 07:21:43.194451 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:44 crc kubenswrapper[4832]: I1204 07:21:44.126281 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:44 crc kubenswrapper[4832]: I1204 07:21:44.198196 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpp8v"]
Dec 04 07:21:44 crc kubenswrapper[4832]: I1204 07:21:44.711411 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:21:44 crc kubenswrapper[4832]: E1204 07:21:44.711827 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.084962 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mpp8v" podUID="ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" containerName="registry-server" containerID="cri-o://9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb" gracePeriod=2
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.602842 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.619236 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zp4\" (UniqueName: \"kubernetes.io/projected/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-kube-api-access-b6zp4\") pod \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") "
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.619417 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-utilities\") pod \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") "
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.619556 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-catalog-content\") pod \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\" (UID: \"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714\") "
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.620870 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-utilities" (OuterVolumeSpecName: "utilities") pod "ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" (UID: "ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.637498 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-kube-api-access-b6zp4" (OuterVolumeSpecName: "kube-api-access-b6zp4") pod "ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" (UID: "ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714"). InnerVolumeSpecName "kube-api-access-b6zp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.684345 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" (UID: "ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.722169 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zp4\" (UniqueName: \"kubernetes.io/projected/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-kube-api-access-b6zp4\") on node \"crc\" DevicePath \"\""
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.722867 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 07:21:46 crc kubenswrapper[4832]: I1204 07:21:46.722886 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.100373 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" containerID="9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb" exitCode=0
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.100529 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerDied","Data":"9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb"}
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.100568 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpp8v"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.100611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp8v" event={"ID":"ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714","Type":"ContainerDied","Data":"e15528e5c82745e602f3272191849a8e411f60e6375da9d9f5e567c099898643"}
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.100639 4832 scope.go:117] "RemoveContainer" containerID="9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.139339 4832 scope.go:117] "RemoveContainer" containerID="aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.151122 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpp8v"]
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.172935 4832 scope.go:117] "RemoveContainer" containerID="3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.174947 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mpp8v"]
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.249699 4832 scope.go:117] "RemoveContainer" containerID="9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb"
Dec 04 07:21:47 crc kubenswrapper[4832]: E1204 07:21:47.250372 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb\": container with ID starting with 9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb not found: ID does not exist" containerID="9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.250423 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb"} err="failed to get container status \"9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb\": rpc error: code = NotFound desc = could not find container \"9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb\": container with ID starting with 9413e721f2c2dbeac3a03d11d31f27585e509e31495ccf513a8aff97e86576fb not found: ID does not exist"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.250453 4832 scope.go:117] "RemoveContainer" containerID="aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b"
Dec 04 07:21:47 crc kubenswrapper[4832]: E1204 07:21:47.251067 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b\": container with ID starting with aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b not found: ID does not exist" containerID="aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.251092 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b"} err="failed to get container status \"aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b\": rpc error: code = NotFound desc = could not find container \"aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b\": container with ID starting with aca1e6b76ee753acbb89ffba6aaaa67578f0827137cc98979c412e7fbc97433b not found: ID does not exist"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.251106 4832 scope.go:117] "RemoveContainer" containerID="3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24"
Dec 04 07:21:47 crc kubenswrapper[4832]: E1204 07:21:47.251810 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24\": container with ID starting with 3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24 not found: ID does not exist" containerID="3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24"
Dec 04 07:21:47 crc kubenswrapper[4832]: I1204 07:21:47.251839 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24"} err="failed to get container status \"3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24\": rpc error: code = NotFound desc = could not find container \"3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24\": container with ID starting with 3a9a857cc43d3a0de188d3e227d1f148ba77248324251df1ae2874498bc1dc24 not found: ID does not exist"
Dec 04 07:21:48 crc kubenswrapper[4832]: I1204 07:21:48.722252 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714" path="/var/lib/kubelet/pods/ec5f4152-4f33-4b4b-bfbd-e8d4b0f4b714/volumes"
Dec 04 07:21:58 crc kubenswrapper[4832]: I1204 07:21:58.710395 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:21:58 crc kubenswrapper[4832]: E1204 07:21:58.711351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:22:10 crc kubenswrapper[4832]: I1204 07:22:10.710889 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:22:10 crc kubenswrapper[4832]: E1204 07:22:10.711712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:22:22 crc kubenswrapper[4832]: I1204 07:22:22.801372 4832 scope.go:117] "RemoveContainer" containerID="280e01b3bfffe1f5d3a9782d0ab3510dda74775c4030bcba7ccaf6bed258b440"
Dec 04 07:22:23 crc kubenswrapper[4832]: I1204 07:22:23.711647 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:22:23 crc kubenswrapper[4832]: E1204 07:22:23.712439 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:22:36 crc kubenswrapper[4832]: I1204 07:22:36.710176 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:22:36 crc kubenswrapper[4832]: E1204 07:22:36.711150 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:22:47 crc kubenswrapper[4832]: I1204 07:22:47.710154 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:22:47 crc kubenswrapper[4832]: E1204 07:22:47.710952 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:23:01 crc kubenswrapper[4832]: I1204 07:23:01.584712 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-8675b9cf45-xl2pz" podUID="6b7c2a80-b3fe-4243-9ea6-19e34f132a16" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Dec 04 07:23:02 crc kubenswrapper[4832]: I1204 07:23:02.710832 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:23:02 crc kubenswrapper[4832]: E1204 07:23:02.711734 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:23:14 crc kubenswrapper[4832]: I1204 07:23:14.721266 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:23:14 crc kubenswrapper[4832]: E1204 07:23:14.722871 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:23:29 crc kubenswrapper[4832]: I1204 07:23:29.710812 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:23:29 crc kubenswrapper[4832]: E1204 07:23:29.711544 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:23:43 crc kubenswrapper[4832]: I1204 07:23:43.712876 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:23:43 crc kubenswrapper[4832]: E1204 07:23:43.713829 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:23:57 crc kubenswrapper[4832]: I1204 07:23:57.710685 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:23:57 crc kubenswrapper[4832]: E1204 07:23:57.711981 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:24:09 crc kubenswrapper[4832]: I1204 07:24:09.710974 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:24:09 crc kubenswrapper[4832]: E1204 07:24:09.712977 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:24:21 crc kubenswrapper[4832]: I1204 07:24:21.710475 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:24:21 crc kubenswrapper[4832]: E1204 07:24:21.711246 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"
Dec 04 07:24:35 crc kubenswrapper[4832]: I1204 07:24:35.711561 4832 scope.go:117] "RemoveContainer" containerID="61996751a405b014dffa97f3feac80c0715381ad1c9fc0810b2a06b44473a830"
Dec 04 07:24:35 crc kubenswrapper[4832]: E1204 07:24:35.712452 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jl6q4_openshift-machine-config-operator(4079cbc8-9860-412d-8bb8-37713e677d1c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jl6q4" podUID="4079cbc8-9860-412d-8bb8-37713e677d1c"